Governing AI For Humanity


September 2024


Final Report
About the High-level Advisory Body
on Artificial Intelligence
The multi-stakeholder High-level Advisory Body on Artificial Intelligence, initially
proposed in 2020 as part of the United Nations Secretary-General’s Roadmap
for Digital Cooperation (A/74/821), was formed in October 2023 to undertake
analysis and advance recommendations for the international governance of
artificial intelligence.

The members of the Advisory Body have participated in their personal capacity,
not as representatives of their respective organizations. This report represents
a majority consensus; no member is expected to endorse every single point
contained in this document. The members affirm their broad, but not unanimous,
agreement with its findings and recommendations. The language included
in this report does not imply institutional endorsement by the members’
respective organizations.



Table of Contents

About the High-level Advisory Body on Artificial Intelligence

Executive summary
  1. The need for global governance
  2. Global AI governance gaps
  3. Enhancing global cooperation
    A. Common understanding
    B. Common ground
    C. Common benefits
    D. Coherent effort
    E. Reflections on institutional models
  4. A call to action

1. Introduction
  A. Opportunities and enablers
  B. Key enablers for harnessing AI for humanity
  C. Governance as a key enabler
  D. Risks and challenges
  E. Risks of AI
  F. Challenges to be addressed

2. The need for global governance
  A. Guiding principles and functions for international governance of AI
  B. Emerging international AI governance landscape

3. Global AI governance gaps
  A. Representation gaps
  B. Coordination gaps
  C. Implementation gaps

4. Enhancing global cooperation
  A. Common understanding
    International scientific panel on AI
  B. Common ground
    Policy dialogue on AI governance
    AI standards exchange
  C. Common benefits
    Capacity development network
    Global fund for AI
    Global AI data framework
  D. Coherent effort
    AI office in the United Nations Secretariat
  E. Reflections on institutional models
    An international AI agency?

5. Conclusion: a call to action

Annexes
  Annex A: Members of the High-level Advisory Body on Artificial Intelligence
  Annex B: Terms of reference of the High-level Advisory Body on Artificial Intelligence
  Annex C: List of consultation engagements in 2024
  Annex D: List of "deep dives"
  Annex E: Risk Global Pulse Check responses
  Annex F: Opportunity scan responses
  Annex G: List of abbreviations



Executive summary

i Artificial intelligence (AI) is transforming our world. This suite of technologies offers tremendous potential for good, from opening new areas of scientific inquiry and optimizing energy grids, to improving public health and agriculture and promoting broader progress on the Sustainable Development Goals (SDGs).

ii Left ungoverned, however, AI's opportunities may not manifest or be distributed equitably. Widening digital divides could limit the benefits of AI to a handful of States, companies and individuals. Missed uses – failing to take advantage of and share AI-related benefits because of lack of trust or missing enablers such as capacity gaps and ineffective governance – could limit the opportunity envelope.

iii AI also brings other risks. AI bias and surveillance are joined by newer concerns, such as the confabulations (or "hallucinations") of large language models, AI-enhanced creation and dissemination of disinformation, risks to peace and security, and the energy consumption of AI systems at a time of climate crisis.

iv Fast, opaque and autonomous AI systems challenge traditional regulatory systems, while ever-more-powerful systems could upend the world of work. Autonomous weapons and public security uses of AI raise serious legal, security and humanitarian questions.

v There is, today, a global governance deficit with respect to AI. Despite much discussion of ethics and principles, the patchwork of norms and institutions is still nascent and full of gaps. Accountability is often notable for its absence, including for deploying non-explainable AI systems that impact others. Compliance often rests on voluntarism; practice belies rhetoric.

vi As noted in our interim report,1 AI governance is crucial – not merely to address the challenges and risks, but also to ensure that we harness AI's potential in ways that leave no one behind.

1. The need for global governance

vii The imperative of global governance, in particular, is irrefutable. AI's raw materials, from critical minerals to training data, are globally sourced. General-purpose AI, deployed across borders, spawns manifold applications globally. The accelerating development of AI concentrates power and wealth on a global scale, with geopolitical and geoeconomic implications.

viii Moreover, no one currently understands all of AI's inner workings enough to fully control its outputs or predict its evolution. Nor are decision makers held accountable for developing, deploying or using systems they do not understand. Meanwhile, negative spillovers and downstream impacts resulting from such decisions are also likely to be global.

ix The development, deployment and use of such a technology cannot be left to the whims of markets alone. National governments and regional organizations will be crucial, but the very nature of the technology itself – transboundary in structure and application – necessitates a global approach. Governance can also be a key enabler for AI innovation for the SDGs globally.

x AI, therefore, presents challenges and opportunities that require a holistic, global approach cutting transversally across political, economic, social, ethical, human rights, technical, environmental and other domains. Such an approach can turn a patchwork of evolving initiatives into a coherent, interoperable whole, grounded in international law and the SDGs, adaptable across contexts and over time.

xi In our interim report, we outlined principles2 that should guide the formation of new international AI governance institutions. These principles acknowledge that AI governance does not take place in a vacuum, and that international law, especially international human rights law, applies in relation to AI.

2. Global AI governance gaps

xii There is no shortage of documents and dialogues focused on AI governance. Hundreds of guides, frameworks and principles have been adopted by governments, companies and consortiums, and regional and international organizations.

xiii Yet, none of them can be truly global in reach and comprehensive in coverage. This leads to problems of representation, coordination and implementation.

xiv In terms of representation, whole parts of the world have been left out of international AI governance conversations. Figure (a) shows seven prominent, non-United Nations AI initiatives.3 Seven countries are parties to all the sampled AI governance efforts, whereas 118 countries are parties to none (primarily in the global South).

xv Equity demands that more voices play meaningful roles in decisions about how to govern technology that affects us. The concentration of decision-making in the AI technology sector cannot be justified; we must also recognize that historically many communities have been entirely excluded from AI governance conversations that impact them.

xvi AI governance regimes must also span the globe to be effective — effective in averting "AI arms races" or a "race to the bottom" on safety and rights, in detecting and responding to incidents emanating from decisions along AI's life cycle which span multiple jurisdictions, in spurring learning, in encouraging interoperability, and in sharing AI's benefits. The technology is borderless and, as it spreads, the illusion that any one State or group of States could (or should) control it will diminish.

xvii Coordination gaps between initiatives and institutions risk splitting the world into disconnected and incompatible AI governance regimes. Coordination is also lacking within the United Nations system. Although many United Nations entities touch on AI governance, their specific mandates mean that none does so in a comprehensive manner.

xviii However, representation and coordination are not enough. Accountability requires implementation so that commitments to global AI governance translate to tangible outcomes in practice, including on capacity development and support to small and medium enterprises, so that opportunities are shared. Much of this will take place at the national and regional levels, but more is also needed globally to address risks and harness benefits.

1 See https://un.org/ai-advisory-body.
2 Guiding principle 1: AI should be governed inclusively, by and for the benefit of all; guiding principle 2: AI must be governed in the public interest; guiding principle 3: AI governance should be built in step with data governance and the promotion of data commons; guiding principle 4: AI governance must be universal, networked and rooted in adaptive multi-stakeholder collaboration; guiding principle 5: AI governance should be anchored in the Charter of the United Nations, international human rights law and other agreed international commitments, such as the SDGs.
3 Excluding the United Nations Educational, Scientific and Cultural Organization (UNESCO) Recommendation on the Ethics of Artificial Intelligence (2021) and the two General Assembly resolutions on AI in 2024: "Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development" (78/265) and "Enhancing international cooperation on capacity-building of artificial intelligence" (78/311).


Figure (a): Representation in seven non-United Nations international AI governance initiatives

Sample (interregional initiatives only; excludes regional initiatives): OECD AI Principles (2019), G20 AI principles (2019), Council of Europe AI Convention drafting group (2022–2024), GPAI Ministerial Declaration (2022), G7 Ministers' Statement (2023), Bletchley Declaration (2023) and Seoul Ministerial Declaration (2024).

Initiatives joined*   Countries
7/7                   7 (Canada, France, Germany, Italy, Japan, the United Kingdom and the United States are parties to all seven)
6/7                   2
5/7                   5
4/7                   7
3/7                   10
2/7                   23
1/7                   21
0/7                   118

118 countries are parties to none of the sampled AI governance initiatives/instruments. Countries not involved, by regional grouping: WEOG, 0 of 29 countries; EEG, 1 of 23 countries; LAC, 25 of 33 countries; APG, 44 of 54 countries; AG, 48 of 54 countries.

* Per endorsement of relevant intergovernmental issuances. Countries are not considered involved in a plurilateral initiative solely because of membership in the European Union or the African Union. Abbreviations: AG, African Group; APG, Asia and the Pacific Group; EEG, Eastern European Group; G20, Group of 20; G7, Group of Seven; GPAI, Global Partnership on Artificial Intelligence; LAC, Latin America and the Caribbean; OECD, Organisation for Economic Co-operation and Development; WEOG, Western European and Others Group.

3. Enhancing global cooperation

xix Our recommendations advance a holistic vision for a globally networked, agile and flexible approach to governing AI for humanity, encompassing common understanding, common ground and common benefits. Only such an inclusive and comprehensive approach to AI governance can address the multifaceted and evolving challenges and opportunities AI presents on a global scale, promoting international stability and equitable development.

xx Guided by principles established in our interim report, our proposals seek to fill gaps and bring coherence to the fast-emerging ecosystem of international AI governance responses and initiatives, helping to avoid fragmentation and missed opportunities. To support these measures efficiently and to partner effectively with other institutions, we propose a light, agile structure as an expression of coherent effort: an AI office in the United Nations Secretariat, close to the Secretary-General, working as the "glue" to unite the initiatives proposed here efficiently and sustainably.

A. Common understanding

xxi A global approach to governing AI starts with a common understanding of its capabilities, opportunities, risks and uncertainties. There is a need for timely, impartial and reliable scientific knowledge and information about AI so that Member States can build a shared foundational understanding worldwide, and to balance information asymmetries between companies housing expensive AI labs and the rest of the world (including via information-sharing between AI companies and the broader AI community).

xxii Pooling scientific knowledge is most efficient at the global level, enabling joint investment in a global public good, and public interest collaboration across otherwise fragmented and duplicative efforts.
International scientific panel on AI

xxiii Learning from precedents such as the Intergovernmental Panel on Climate Change (IPCC) and the United Nations Scientific Committee on the Effects of Atomic Radiation, an international, multidisciplinary scientific panel on AI could collate and catalyse leading-edge research to inform scientists, policymakers, Member States and other stakeholders seeking scientific perspectives on AI technology or its applications from an impartial, credible source.

xxiv A scientific panel under the auspices of the United Nations could source expertise on AI-related opportunities. This might include facilitating "deep dives" into applied domains of the SDGs, such as health care, energy, education, finance, agriculture, climate, trade and employment.

xxv Risk assessments could also draw on the work of other AI research initiatives, with the United Nations offering a uniquely trusted "safe harbour" for researchers to exchange ideas on the "state of the art". By pooling knowledge across silos in countries or companies that may not otherwise engage or be included, a United Nations-hosted panel can help to rectify misperceptions and bolster trust globally.

xxvi Such a panel should operate independently, with support from a cross-United Nations system team drawn from the below-proposed AI office and relevant United Nations agencies, such as the International Telecommunication Union (ITU) and the United Nations Educational, Scientific and Cultural Organization (UNESCO). It should partner with research efforts led by other international institutions, such as the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on Artificial Intelligence.

Recommendation 1
An international scientific panel on AI

We recommend the creation of an independent international scientific panel on AI, made up of diverse multidisciplinary experts in the field serving in their personal capacity on a voluntary basis. Supported by the proposed United Nations AI office and other relevant United Nations agencies, partnering with other relevant international organizations, its mandate would include:

a) Issuing an annual report surveying AI-related capabilities, opportunities, risks and uncertainties, identifying areas of scientific consensus on technology trends and areas where additional research is needed;

b) Producing quarterly thematic research digests on areas in which AI could help to achieve the SDGs, focusing on areas of public interest which may be under-served; and

c) Issuing ad hoc reports on emerging issues, in particular the emergence of new risks or significant gaps in the governance landscape.


B. Common ground

xxvii Alongside a common understanding of AI, common ground is needed to establish interoperable governance approaches anchored in global norms and principles in the interests of all countries. This is required at the global level to avert regulatory races to the bottom while reducing regulatory friction across borders; to maximize learning and technical interoperability; and to respond effectively to challenges arising from the transboundary character of AI.

Policy dialogue on AI governance

xxviii An inclusive policy forum is needed so that all Member States, drawing on the expertise of stakeholders, can share best practices that are based on human rights and foster development, that foster interoperable governance approaches and that account for transboundary challenges that warrant further policy consideration. This does not mean global governance of all aspects of AI, but it can set the framework for international cooperation and better align industry and national efforts with global norms and principles.

xxix Institutionalizing such multi-stakeholder exchange under the auspices of the United Nations can provide a reliably inclusive home for discussing emerging governance practices and appropriate policy responses. By edging beyond comfort zones, dialogue between non-likeminded countries, and between States and stakeholders, can catalyse learning and lay foundations for greater cooperation, such as on safety standards and rights, and for times of global crisis. A United Nations setting is essential to anchoring this effort in the widest possible set of shared norms.

xxx Combined with capacity development (see recommendations 4 and 5), such inclusive dialogue on governance approaches can help States and companies to update their regulatory approaches and methodologies to respond to accelerating AI. Connections to the international scientific panel would enhance that dynamic, comparable to the relationship between IPCC and the United Nations Climate Change Conference.

xxxi A policy dialogue could begin on the margins of existing meetings in New York (such as the General Assembly4) and in Geneva. Twice-yearly meetings could focus more on opportunities across diverse sectors in one meeting, and more on risks in the other meeting.5 Moving forward, a gathering like this would be an appropriate forum for sharing information about AI incidents, such as those that stretch or exceed the capacities of existing agencies.

xxxii One portion of each dialogue session might focus on national approaches led by Member States, with a second portion sourcing expertise and inputs from key stakeholders – in particular, technology companies and civil society representatives. In addition to the formal dialogue sessions, multi-stakeholder engagement on AI policy could leverage other existing, more specialized mechanisms, such as the ITU AI for Good meeting, the annual Internet Governance Forum meeting, the UNESCO Global Forum on AI Ethics and the United Nations Conference on Trade and Development (UNCTAD) eWeek.

4 Analogous to the high-level political forum in the context of the SDGs that takes place under the auspices of the Economic and Social Council.
5 Relevant parts of the United Nations system could be engaged to highlight opportunities and risks, including ITU on AI standards; ITU, the United Nations Conference on Trade and Development (UNCTAD), the United Nations Development Programme (UNDP) and the Development Coordination Office on AI applications for the SDGs; UNESCO on ethics and governance capacity; the Office of the United Nations High Commissioner for Human Rights (OHCHR) on human rights accountability based on existing norms and mechanisms; the Office for Disarmament Affairs on regulating AI in military systems; UNDP on support to national capacity for development; the Internet Governance Forum for multi-stakeholder engagement and dialogue; the World Intellectual Property Organization (WIPO), the International Labour Organization (ILO), the World Health Organization (WHO), the Food and Agriculture Organization of the United Nations (FAO), the World Food Programme, the United Nations High Commissioner for Refugees (UNHCR), UNESCO, the United Nations Children's Fund, the World Meteorological Organization and others on sectoral applications and governance.
Recommendation 2
Policy dialogue on AI governance

We recommend the launch of a twice-yearly intergovernmental and multi-stakeholder policy dialogue on AI governance on the margins of existing meetings at the United Nations. Its purpose would be to:

a) Share best practices on AI governance that foster development while furthering respect, protection and fulfilment of all human rights, including pursuing opportunities as well as managing risks;

b) Promote common understandings on the implementation of AI governance measures by private and public sector developers and users to enhance international interoperability of AI governance;

c) Share voluntarily significant AI incidents that stretched or exceeded the capacity of State agencies to respond; and

d) Discuss reports of the international scientific panel on AI, as appropriate.

AI standards exchange

xxxiii When AI systems were first explored, few standards existed to help to navigate or measure this new frontier. More recently, there has been a proliferation of standards. Figure (b) illustrates the increasing number of standards adopted by ITU, the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC) and the Institute of Electrical and Electronics Engineers (IEEE).

xxxiv There is no common language among these standards bodies, and many terms routinely used with respect to AI – fairness, safety, transparency – do not have agreed definitions. There are also disconnects between those standards that were adopted for narrow technical or internal validation purposes, and those that are intended to incorporate broader ethical principles. We now have an emerging set of standards that are not grounded in a common understanding of meaning or are divorced from the values that they were intended to uphold.

xxxv Drawing on the expertise of the international scientific panel and incorporating members from the various national and international entities that have contributed to standard-setting, as well as representatives from technology companies and civil society, the United Nations system could serve as a clearing house for AI standards that would apply globally.


Figure (b): Number of standards related to AI

[Stacked bar chart showing the number of AI-related standards adopted by IEEE, ISO/IEC and ITU, by year: 3 in 2018, 6 in 2019, 16 in 2020, 32 in 2021, 58 in 2022, 101 in 2023 and 117 in 2024 (Jan.–Jun.). The chart also notes further ISO and IEEE standards under development.]

Sources: IEEE, ISO/IEC, ITU, World Standards Cooperation (based on June 2023 mapping, extended through inclusion of standards related to AI).

Recommendation 3
AI standards exchange

We recommend the creation of an AI standards exchange, bringing together representatives from national and international standard-development organizations, technology companies, civil society and representatives from the international scientific panel. It would be tasked with:

a) Developing and maintaining a register of definitions and applicable standards for measuring and evaluating AI systems;

b) Debating and evaluating the standards and the processes for creating them; and

c) Identifying gaps where new standards are needed.
C. Common benefits

xxxvi The 2030 Agenda for Sustainable Development, with its 17 SDGs, can give clarity of purpose to the development, deployment and uses of AI, bending the arc of investments towards global development challenges. Without a comprehensive and inclusive approach to AI governance, the potential of AI to contribute positively to the SDGs could be missed, and its deployment could inadvertently reinforce or exacerbate disparities and biases.

xxxvii AI is no panacea for sustainable development challenges; it is one component within a broader set of solutions. To truly unlock AI's potential to address societal challenges, collaboration among governments, academia, industry and civil society is crucial, so that AI-enabled solutions are inclusive and equitable.

xxxviii Much of this depends on access to talent, computational power (or "compute") and data, in ways that help cultural and linguistic diversity to flourish. Basic infrastructure and the resources to maintain it are also pre-requisites.

xxxix Regarding talent, not every society needs cadres of computer scientists for building their own models. However, whether technology is bought, borrowed or built, a baseline socio-technical capacity is needed to understand the capabilities and limitations of AI, and harness AI-enabled use cases appropriately while addressing context-specific risks.

xl Compute is one of the biggest barriers to entry in the field of AI. Of the top 100 high-performance computing clusters in the world capable of training large AI models, not one is hosted in a developing country.6 It is unrealistic to promise access to compute that even the wealthiest countries and companies struggle to acquire. Rather, we seek to put a floor under the AI divide for those unable to secure needed enablers via other means, including by supporting initiatives towards distributed and federated AI development models.

xli Turning to data, it is common to speak of misuse of data in the context of AI (such as infringements on privacy) or missed uses of data (failing to exploit existing data sets). However, a related problem is missing data, which includes the large portions of the globe that are data poor. Failure to reflect the world's linguistic and cultural diversity has been linked to bias in AI systems, but may also be a missed opportunity for those communities to access AI's benefits.

xlii A set of shared resources – including open models – is needed to support inclusive and effective participation by all Member States in the AI ecosystem, and here global approaches have distinct advantages.

Capacity development network

xliii Growing public and private demand for human and other AI capacity coincides with emergent national, regional and public-private AI centres of excellence that have international capacity development roles. A global network can serve as a matching platform that expands the range of possible partnering and enhances interoperability of capacity-building approaches.

xliv From the Millennium Development Goals to the SDGs, the United Nations has long embraced developing the capacities of individuals and institutions.7 A network of institutions, affiliated with the United Nations, could expand options for countries seeking capacity partnerships. It could also catalyse new national centres of excellence to stimulate the development of local AI innovation ecosystems, following interoperable approaches aligned with United Nations normative commitments.

xlv Such a network would promote an alternative paradigm of AI technology development: bottom-up, cross-domain, open and collaborative. National-level efforts could continue to employ diagnosis tools, such as the UNESCO AI Readiness Assessment Methodology, to help to identify gaps at the national level, with the international network helping to address them.

6 Proxy indicator since most high-performance computing clusters do not have graphics processing units (GPUs) and are of limited use for advanced AI.
7 Through the work of UNESCO, WIPO and others, the United Nations has helped to uphold the rich diversity of cultures and knowledge-making traditions across the globe. The United Nations University has long had a commitment to capacity-building through higher education and research, and the United Nations Institute for Training and Research has helped to train officials in domains critical to sustainable development. The UNESCO Readiness Assessment Methodology is a key tool to support Member States in their implementation of the UNESCO Recommendation on the Ethics of Artificial Intelligence. Other examples include the WHO Academy in Lyon, France, the UNCTAD Virtual Institute, the United Nations Disarmament Fellowship run by the Office for Disarmament Affairs and the capacity development programmes led by ITU and UNDP.

Recommendation 4
Capacity development network

We recommend the creation of an AI capacity development network to link up a set of collaborating, United Nations-affiliated capacity development centres making available expertise, compute and AI training data to key actors. The purpose of the network would be to:

a) Catalyse and align regional and global AI capacity efforts by supporting networking among them;

b) Build AI governance capacity of public officials to foster development while furthering respect, protection and fulfilment of all human rights;

c) Make available trainers, compute and AI training data across multiple centres to researchers and social entrepreneurs seeking to apply AI to local public interest use cases, including via:

  i) Protocols to allow cross-disciplinary research teams and entrepreneurs in compute-scarce settings to access compute made available for training/tuning and applying their models appropriately to local contexts;

  ii) Sandboxes to test potential AI solutions and learn by doing;

  iii) A suite of online educational opportunities on AI targeted at university students, young researchers, social entrepreneurs and public sector officials; and

  iv) A fellowship programme for promising individuals to spend time in academic institutions or technology companies.
Global fund for AI

xlvi Many countries face fiscal and resource constraints limiting their ability to use AI appropriately and effectively. Despite any capacity development efforts (recommendation 4), some may still be unable to access training, compute, models and training data without international support. Other funding efforts may also not scale without it.

xlvii Our intention in proposing a fund is not to guarantee access to advanced compute resources and capabilities. The answer may not always be more compute. We also need better ways to connect talent, compute and data. The fund's purpose would be to address the underlying capacity and collaboration gaps for those unable to access requisite enablers so that:

a. Countries in need can access AI enablers, putting a floor under the AI divide;

b. Collaborating on AI capacity development leads to habits of cooperation and mitigates geopolitical competition;

c. Countries with divergent regulatory approaches have incentives to develop common templates for governing data, models and applications for societal-level challenges related to the SDGs and scientific breakthroughs.

xlviii This public interest focus makes the fund complementary to the proposal for an AI capacity development network, to which the fund would also channel resources. The fund would provide an independent capacity for monitoring of impact, and could source and pool in-kind contributions, including from private sector entities, to make available AI-related training programmes, time, compute, models and curated data sets at lower-than-market cost. In this manner, we ensure that vast swathes of the world are not left behind and are instead empowered to harness AI for the SDGs in different contexts.

xlix It is in everyone's interest to ensure that there is cooperation in the digital world as in the physical world. Analogies can be made to efforts to combat climate change, where the costs of transition, mitigation or adaptation do not fall evenly, and international assistance is essential to help resource-constrained countries so that they can join the global effort to tackle a planetary challenge.


5
Recommendation 5
Global fund for AI

We recommend the creation of a global fund for AI to put a floor under the AI divide. Managed
by an independent governance structure, the fund would receive financial and in-kind
contributions from public and private sources and disburse them, including via the capacity
development network, to facilitate access to AI enablers to catalyse local empowerment for
the SDGs, including:

a) Shared computing resources for model training and fine-tuning by AI developers from
countries without adequate local capacity or the means to procure it;

b) Sandboxes and benchmarking and testing tools to mainstream best practices in safe
and trustworthy model development and data governance;

c) Governance, safety and interoperability solutions with global applicability;

d) Data sets and research into how data and models could be combined for SDG-related
projects; and

e) A repository of AI models and curated data sets for the SDGs.

Global AI data framework

l Access to AI training data, via market or other mechanisms, is a critical enabler for flourishing local AI ecosystems — particularly in countries, communities, regions and demographic groups with “missing” data (see the section on “common benefits” above).

li Only global collective action can incentivize interoperability, stewardship, privacy preservation, empowerment and rights enhancement in ways that promote a “race to the top” across jurisdictions towards protection of human rights and other agreed commitments, data availability and fair compensation to data subjects in the governance of the collection, creation, use and monetization of AI training data. This aim motivates our proposal for a global AI data framework.

lii Such a framework would not create new data-related rights. Rather, it would address issues of availability, interoperability and use of AI training data. It would help to build common understanding on how to align different national and regional data protection frameworks. It could also promote flourishing local AI ecosystems supporting cultural and linguistic diversity, as well as limiting further economic concentration.

liii These measures could be complemented by promoting data commons and provisions for hosting data trusts in areas relevant to the SDGs, based on templates for agreements to hold and share data in a fair, safe and equitable manner. The development of these templates and the actual storage and analysis of data held in commons or in trusts could be supported by the proposed capacity development network and global fund for AI (recommendations 4 and 5).

liv The United Nations is uniquely positioned to support the establishment of global principles and practical arrangements for AI training data governance and use, in line with agreed international commitments on human rights, intellectual property and sustainable development, building on years of work by the data community and integrating it with recent developments on AI ethics and governance. This is analogous to the role of the United Nations Commission on International Trade Law in advancing international trade by developing legal and non-legal cross-border frameworks.

lv Similarly, the Commission on Science and Technology for Development and the Statistical Commission have on their agenda data for development and data on the SDGs. There are also important issues of content, copyright and protection of indigenous knowledge and cultural expression being considered by the World Intellectual Property Organization (WIPO).

Recommendation 6
Global AI data framework

We recommend the creation of a global AI data framework, developed through a process initiated by a relevant agency such as the United Nations Commission on International Trade Law and informed by the work of other international organizations, for:

a) Outlining data-related definitions and principles for global governance of AI training data, including as distilled from existing best practices, and to promote cultural and linguistic diversity;

b) Establishing common standards around AI training data provenance and use for transparent and rights-based accountability across jurisdictions; and

c) Instituting market-shaping data stewardship and exchange mechanisms for enabling flourishing local AI ecosystems globally, such as:

i) Data trusts;

ii) Well-governed global marketplaces for exchange of anonymized data for training AI models; and

iii) Model agreements for facilitating international data access and global interoperability, potentially as techno-legal protocols to the framework.

D. Coherent effort

lvi The above proposals seek to address the representation, coordination and implementation gaps identified in the emerging international AI governance regime. These gaps can be addressed through partnerships and collaboration with existing institutions and mechanisms to promote a common understanding, common ground and common benefits.

lvii Nevertheless, without a dedicated focal point in the United Nations to support and enable coordination among these and other efforts, the world will lack the inclusively networked, agile and coherent approach required for effective and equitable governance of AI as a transboundary, fast-changing and general-purpose technology.

lviii The patchwork of norms and institutions outlined under the section “Global AI governance gaps” above reflects widespread recognition that governance of AI is a global necessity. The unevenness of that response demands some measure of coherent effort.

AI office in the United Nations Secretariat

lix We, therefore, propose a light-touch mechanism to act as the “glue” that supports and catalyses the proposals in this report, including through partnerships, while also enabling the United Nations system to speak with one voice in the evolving AI governance ecosystem.

lx This small, agile capacity, in the form of an AI office within the United Nations Secretariat, would report to the Secretary-General, conferring the benefit of connections throughout the United Nations system, without being tied to one part of it. That is important because of the uncertain future of AI and the strong likelihood that it will permeate all aspects of human endeavour.

lxi Such a body should be agile, champion inclusion and partner rapidly to accelerate coordination and implementation – drawing as a first priority on existing resources and functions within the United Nations system. The focus should be on civilian applications of AI.

Figure (c): Proposed role of the United Nations in the international AI governance ecosystem (indicative, not exhaustive)

[Diagram: the United Nations as an enabling connector in an ecosystem organized around common understanding, common ground and common benefits — engaging AI summits, GPAI, OECD, the Council of Europe, the Group of Seven, the Group of 20, SDOs, regional organizations and national and regional initiatives, and linking them with the proposed international scientific panel, governance dialogue, standards exchange, capacity development network, global fund for AI and global AI data framework.]

Abbreviations: GPAI, Global Partnership on Artificial Intelligence; OECD, Organisation for Economic Co-operation and Development; SDOs, standards development organizations.

lxii It could be staffed in part by United Nations personnel seconded from specialized agencies and other parts of the United Nations system, such as ITU, UNESCO, the Office of the United Nations High Commissioner for Human Rights (OHCHR), UNCTAD, the United Nations University and the United Nations Development Programme (UNDP). It should engage multiple stakeholders, including companies, civil society and academia, and work in partnership with leading organizations outside of the United Nations (see fig. (c)). This would position the United Nations to enable connections for fostering common understanding, common ground and common benefits in the international AI governance ecosystem.

lxiii Recommendation 7 is made on the basis of a clear-eyed assessment as to where the United Nations can add value, including where it can lead, where it can aid coordination and where it should step aside. It also brings the benefits of existing institutional arrangements, including pre-negotiated funding and administrative processes that are well established and understood.

Recommendation 7
AI office within the Secretariat

We recommend the creation of an AI office within the Secretariat, reporting to the Secretary-General. It should be light and agile in organization, drawing, wherever possible, on relevant existing United Nations entities. Acting as the “glue” that supports and catalyses the proposals in this report, partnering and interfacing with other processes and institutions, the office’s mandate would include:

a) Providing support for the proposed international scientific panel, policy dialogue, standards exchange, capacity development network and, to the extent required, the global fund and global AI data framework;

b) Engaging in outreach to diverse stakeholders, including technology companies, civil society and academia, on emerging AI issues; and

c) Advising the Secretary-General on matters related to AI, coordinating with other relevant parts of the United Nations system to offer a whole-of-United Nations response.

E. Reflections on institutional models

lxiv Discussions about AI often resolve into extremes. In our consultations around the world, we engaged with those who see a future of boundless goods provided by ever-cheaper, ever-more-helpful AI systems. We also spoke with those wary of darker futures, of division and unemployment, and even extinction.8

lxv We do not know whether the utopian or dystopian future is more likely. Equally, we are mindful that the technology may go in a direction that does away with this duality. This report focuses on the near-term opportunities and risks, based on science and grounded in fact.

lxvi The seven recommendations outlined above offer our best hope for reaping the benefits of AI, while minimizing and mitigating the risks, as AI continues evolving. We are also mindful of the practical challenges to international institution-building on a larger scale. This is why we are proposing a networked institutional approach, with light and agile support. If or when risks become more acute and the stakes for opportunities escalate, such calculations may change.

lxvii The world wars led to the modern international system; the development of ever-more-powerful chemical, biological and nuclear weapons led to regimes limiting their spread and promoting peaceful uses of the underlying technologies. Evolving understanding of our common humanity led to the modern human rights system and our ongoing commitment to the SDGs for all. Climate change evolved from a niche concern to a global challenge.

lxviii AI may similarly rise to a level that requires more resources and more authority than is proposed in the above-mentioned recommendations, into harder functions of norm elaboration, implementation, monitoring, verification and validation, enforcement, accountability, remedies for harm and emergency responses. Reflecting on such institutional models, therefore, is prudent. The final section of this report seeks to contribute to that effort.

4. A call to action

lxix We remain optimistic about the future with AI and its positive potential. That optimism depends, however, on realism about the risks and the inadequacy of structures and incentives currently in place. The technology is too important, and the stakes are too high, to rely only on market forces and a fragmented patchwork of national and multilateral action.

lxx The United Nations can be the vehicle for a new social contract for AI that ensures global buy-in for a governance regime which protects and empowers us all. Such a social contract will ensure that opportunities are fairly distributed, and the risks are not loaded on to the most vulnerable – or passed on to future generations, as we have seen, tragically, with climate change.

lxxi As a group and as individuals from across many fields of expertise, organizations and parts of the world, we look forward to continuing this crucial conversation. Together with the many others we have connected with on this journey, and the global community they represent, we hope that this report contributes to our combined efforts to govern AI for humanity.

8 See https://safe.ai/work/statement-on-ai-risk.

Figure (d): High-level Advisory Body on Artificial Intelligence at its meeting in
Singapore, 29 May 2024



1. Introduction

1 The Secretary-General’s High-level Advisory Body on Artificial Intelligence was formed to analyse and advance recommendations for the international governance of artificial intelligence (AI). Our members are diverse by geography and gender, discipline and age; we draw expertise from governments, civil society, the private sector and academia. Intense and wide-ranging discussions have yielded broad agreement (as reflected in our interim report1) that there is a global governance deficit with respect to AI. In that report, we articulated guiding principles for that role and functions that could be required internationally.

2 Over subsequent months, we benefited from extensive feedback and consultations. This included 18 “deep dives” on specific issue areas with more than 500 expert participants, more than 250 written submissions from over 150 organizations and 100 individuals from all regions, an AI risk pulse check with around 350 expert respondents from all regions, an opportunity scan with around 120 expert respondents from all regions, and regular consultations with and briefings of Member States, United Nations entities and other stakeholder groups in more than 40 engagements across all regions.2 Members of the Advisory Body have also engaged extensively in forums around the world, held more than a hundred virtual discussions and had three plenary in-person meetings, in New York, Geneva and Singapore.

3 The present final report, therefore, has many authors. While it cannot reflect the full richness and diversity of views expressed, it shows our shared commitment to ensuring that AI is developed, deployed and used in a manner that benefits all of humanity, and ensuring that AI is governed effectively and inclusively at the international level.

4 This report reaffirms the findings of the Advisory Body’s interim report on opportunities and enablers, risks and challenges; it also reprises the need for global governance of AI and outlines seven recommendations.

5 These include a scientific panel to promote a common understanding of AI capabilities, opportunities, risks and uncertainties. Based on this common understanding, we need mechanisms to find common ground on how AI should be governed at the international level. Achieving that depends on regular dialogue and the development of standards acceptable and applicable to all.

6 The report also makes recommendations on common benefits, intended to ensure that the benefits of AI are equitably shared, which can depend on access to models or capabilities such as talent, computational power (or “compute”) and data. These include a network for capacity development, a global fund for AI and a global AI data framework.

7 To enable those efforts, to partner with other initiatives and institutions on addressing AI concerns and opportunities, and to ensure that the United Nations system speaks with one voice on AI, we propose the creation of an AI office within the United Nations Secretariat.

8 While we have considered the possibility of recommending the creation of an international agency for AI, we are not recommending this action currently; yet we acknowledge the need for governance to keep pace with technological evolution.

1 See https://un.org/ai-advisory-body.
2 See annex C for an overview of the consultations.

9 Beyond immediate multilateral debates and processes involving Governments, our report is also intended for civil society and the private sector, researchers and concerned people around the world. We are acutely aware that achieving the ambitious goals that we have outlined can only happen with multisector global participation.

10 Overall, we believe that the future of this technology is still open. This has been corroborated by our deep dive into the direction of technology and the debate between open and closed approaches to its development (see box 9). Larger and more powerful models developed in fewer and fewer corporations is one alternative future. Another could be a more diverse global innovation landscape dominated by interoperable small to medium-sized AI models delivering a multitude of societal and economic applications. Our recommendations seek to make the latter more likely, while also acknowledging the risks.

11 From its founding, the United Nations has been committed to promoting the economic and social advancement of all peoples.3 The Millennium Development Goals sought to establish ambitious targets so that economic opportunities are made available to all the world’s people; the Sustainable Development Goals (SDGs) then sought to reconcile the need for development with the environmental constraints of our planet. The expanded development, deployment and use of AI tools and systems pose the next great challenge to ensuring that we embrace our digital future together, rather than widening our digital divide.

12 Inclusive AI governance is, arguably, one of the most difficult governance challenges the United Nations will face. There is a mismatch between the dominant role of the private sector in AI and the Westphalian system of international politics. States are tempted by AI’s potential for power and prosperity, at a time of intense geopolitical competition. Many societies are still at the margins of AI development, deployment and use, while a few are gripped by excitement mixed with concern at AI’s cross-cutting impact.

13 Despite the challenges, there is no opt-out. The stakes are simply too high for the United Nations, its Member States and the wider community whose aspirations the United Nations represents. We hope that this report provides some signposts to help our concerted efforts to govern AI for humanity.

A. Opportunities and enablers

14 AI is transforming our world. This suite of technologies4 offers tremendous potential for good, from opening new areas of scientific inquiry (see box 1) and optimizing energy grids, to improving public health or agriculture.5 If realized, the potential opportunities afforded by the use of AI tools for individuals, sectors of the economy, scientific research and other domains of public interest could play important roles in boosting our economies (see box 2), as well as transforming our societies for the better. Public interest AI – such as forecasting of and addressing pandemics, floods, wildfires and food insecurity – could even help to drive progress on the SDGs.

3 This included through trade, foreign direct investment and technology transfer as enablers for long-term development.
4 According to the Organisation for Economic Co-operation and Development (OECD), “An AI system is a machine-based system that, for explicit or implicit
objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical
or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment” (see https://oecd.ai/en/wonk/ai-system-
definition-update).
5 We believe, however, that rigorous assessment by domain experts is needed to assess claims of AI’s benefits. Pursuit of AI for good should be based on
scientific evidence and a thorough evaluation of trade-offs and alternatives. In addition to scientific inquiry, the social sciences are also being transformed.



Box 1: Potential of AI in advancing science
AI could well be the next major leap in scientific advancement, building on the transformative legacy of the
Internet. The World Wide Web facilitated the sharing of vast amounts of experimental data, scientific papers and
documentation among scientists. AI is building on this foundation by enabling the analysis of extensive data sets,
uncovering hidden patterns, building new hypotheses and associations and accelerating the pace of discovery,
including via experiments at scale with automated robotics.

The impact of AI on science spans major disciplines. From biology to physics, and from environmental science to social sciences, AI is being integrated into research workflows and is accelerating the production of scientific knowledge. Some of today’s claims may be hyped, while others have been demonstrated, and AI’s long-term potential appears promising.a

For example, in biology, the 50-year challenge of protein-folding and protein structure prediction has been
addressed with AI. This includes predicting the structure of over 200 million proteins, with the resulting open-
access database being used by over 2 million scientists in over 190 countries at the time of writing, many of them
working on neglected diseases. This has since been extended to life’s other biomolecules, DNA, RNA and ligands
and their interactions.

For Alzheimer’s, Parkinson’s and amyotrophic lateral sclerosis (ALS), experts using AI are identifying disease
biomarkers and predicting treatment responses, significantly improving precision and speed of diagnosis and
treatment development.b Broadly, AI is helping to advance precision medicine (e.g. in neurodegenerative diseases)
by tailoring treatments based on genetic and clinical profiles. AI technology is also helping to accelerate the
discovery and development of new chemical compounds.c

In radio astronomy, the speed and scale of data being collected by modern instruments, such as the Square
Kilometre Array, can overwhelm traditional methods. AI can make a difference, including by helping to select
which part of the data to focus on for novel insights. Through “unsupervised clustering”, AI can pick out patterns
in data without being told what specifically to look for.d Applying AI to social science research could also offer
profound insights into complex human dynamics, enhancing our understanding of societal trends and economic
developments.

In time, by enabling unprecedented levels of interdisciplinarity, AI may be designed and deployed to spawn new
scientific domains, just as bioinformatics and neuroinformatics emerged from the integration of computational
techniques with biological and neurological research. AI’s ability to integrate and analyse diverse data sets from
areas such as climate change, food security and public health could open research avenues that bridge these
traditionally separate fields, if done responsibly.

AI may also enhance the public policy impact of scientific research by allowing for the validation of complex
hypotheses, for example combining climate models with agricultural data to predict food security risks and linking
these insights with public health outcomes. Another prospect is the boosting of citizen science and the leveraging
of local knowledge and data for global challenges.

a See John Jumper and others, “Highly accurate protein structure prediction with AlphaFold”, Nature, vol. 596 (July 2021), pp. 583–589; see also Josh
Abramson and others, “Accurate structure prediction of biomolecular interactions with AlphaFold 3”, Nature, vol. 630, pp. 493–500 (May 2024).
b Isaias Ghebrehiwet and others, “Revolutionizing personalized medicine with generative AI: a systematic review”, Artificial Intelligence Review, vol. 57,
No. 127 (April 2024).
c Amil Merchant and others, “Scaling deep learning for materials discovery”, Nature, vol. 624, pp. 80–85 (November 2023).
d Zack Savitsky, “Astronomers are enlisting AI to prepare for a data downpour”, MIT Technology Review, 20 May 2024.

Box 2: Economic opportunities of AI
Since the Industrial Revolution, a handful of innovations have dramatically accelerated economic progress. These
earlier “general-purpose technologies” have reshaped multiple sectors and industries. The last major change
came with computers and the digital age. These technologies transformed economies and increased productivity
worldwide, but their full impact took decades to be felt.

Generative AI is breaking the trend of slow adoption. Experts believe its transformative effects will be seen within
this decade. This quick integration means new developments in AI could rapidly reshape industries, change work
processes and increase productivity. The rapid adoption of AI may thus transform our economies and societies in
unprecedented ways.

The economic benefits of AI may be considerable. Although it is difficult to predict all the ramifications of AI on
our complex economies, projections indicate that AI could significantly increase global gross domestic product,
with relevant impacts across almost all sectors. For businesses, especially micro and small and medium-sized
enterprises, AI can offer access to advanced analytics and automation tools, which were previously only available
to larger corporations. The wide applicability of AI suggests that AI could be a general-purpose technology. As
such, AI could enable productivity for individuals, small and large businesses, and other organizations in sectors
as diverse as retail, manufacturing and operations, health care and the public sector, in developed and developing
economies.a Realizing these gains, however, will require broad adoption within and across sectors; application in productivity-enhancing uses; and AI that makes workers more productive and ushers in new economic activities at scale. It will also require investment and capital deepening, co-innovations, process and organizational changes, workforce readiness and enabling policies.

Figure 1: Selected development opportunities and risks from AI in emerging markets

Opportunities:
• New products and business models — including leapfrogging solutions, solutions for bottom of pyramid individuals, and easier access to credit
• Automation of core business processes — leading to lower product costs
• Human capital development
• Innovation in government services

Risks:
• Obsolescence of traditional export-led path to economic growth
• Increased digital and technological divide
• Transformation of job requirements and disruption of traditional job functions
• Privacy, security and public trust

Source: International Finance Corporation.

a James Manyika and Michael Spence, “The coming AI economic revolution: can artificial intelligence reverse the productivity slowdown?”, Foreign
Affairs, 24 October 2023.



Box 2: Economic opportunities of AI (continued)
Nevertheless, while AI can enhance productivity, boost international trade and increase income, it is also expected to impact work. Research suggests that AI could assist workers in some cases and displace jobs in others.b Research, including by the International Labour Organization (ILO), suggests that in the foreseeable future, AI is likely to be more worker-assistive than worker-displacing.c

Research has also shown that when it occurs, job displacement is expected to occur differently in economies at
different stages of development.d While advanced economies are more exposed, they are also better prepared to
harness AI and complement their workforce. Low- and middle-income countries may have fewer capabilities to
leverage this technology. Additionally, the integration of AI in the workforce may disproportionately affect certain
demographics, with women potentially facing a higher risk of job displacement in some sectors.

Without focused and coordinated efforts to close the digital divide, AI’s potential to support sustainable development and poverty alleviation will not be realized, leaving large segments of the global population disadvantaged in a swiftly changing technological environment and exacerbating existing inequalities.

To successfully integrate AI into the global economy, we need effective governance that manages risks and
ensures fair outcomes. This means among other options creating regulatory sandboxes for testing AI systems,
promoting international cooperation on standards and setting up mechanisms to continuously evaluate AI’s
impact on labour markets and society. Apart from sound national AI strategies and international support, it
specifically requires:
• Skills development: Implementing education and training programmes to develop AI skills across the
workforce, from basic digital literacy to advanced technical expertise, to prepare workers for an AI-
augmented future.
• Digital infrastructure: Significant investment in digital infrastructure, especially in developing countries, to
bridge the AI divide and facilitate widespread AI adoption.
• Workplace integration: Leveraging social dialogue and public-private partnerships for managing AI
integration in the workplace, ensuring worker participation in the process and protecting labour rights.
• Value chain considerations: Ensuring decent work conditions along the entire AI value chain, including
often overlooked areas, such as data annotation and content moderation, for equitable AI development.

b Erik Brynjolfsson and others, “Generative AI at work”, National Bureau of Economic Research, working paper 31161, 2023; see also Shakked Noy
and Whitney Zhang, “Experimental evidence on the productivity effects of generative artificial intelligence”, Science, vol. 381, No. 6654, pp. 187–192
(July 2023).
c Pawel Gmyrek and others, Generative AI and Jobs: A Global Analysis of Potential Effects on Job Quantity and Quality (Geneva: ILO, 2023).
d Mauro Cazzaniga and others, “Gen-AI: artificial intelligence and the future of work”, staff discussion note SDN2024/001 (Washington, D.C.:
International Monetary Fund, 2024).

B. Key enablers for harnessing AI for humanity

15 The potential opportunities emerging from the development and use of AI will not necessarily be realized or pursued equitably. In May 2024, an analysis of funding for AI projects to advance progress towards completion of the SDGs found only 10 per cent of grants allocated had gone to organizations based in low- or middle-income countries; for private capital, the figure was 25 per cent (over 90 per cent of which in China).6

C. Governance as a key enabler

16 Enablers need to be in place globally for the benefits of AI to be fully realized and accrued beyond a few people in a few countries. Ensuring that AI is deployed for the common good, and that its opportunities are distributed equitably, will require governmental and intergovernmental action to incentivize participation from the private sector, academia and civil society. Any governance framework should shape incentives globally to promote larger and more inclusive objectives and to help identify and address trade-offs.

D. Risks and challenges

17 The development, deployment and use of AI bring risks, which can span many areas at the same time. We conceptualize AI-related risks in relation to vulnerabilities; this offers a vulnerability-based way to define policy agendas.

18 Challenges to traditional regulatory systems arise from AI’s speed, opacity and autonomy. AI’s accelerating technical development and deployment also raise the stakes for international governance, its general-purpose nature having implications across borders for multiple domains simultaneously.

E. Risks of AI

19 Problems such as bias in AI systems and invidious AI-enabled surveillance are increasingly documented. Other risks are associated with the use of advanced AI, such as the confabulations of large language models, high resource consumption and risks to peace and security. AI-generated disinformation threatens democratic institutions.

20 Putting together a comprehensive list of AI risks for all time is a fool’s errand, given the ubiquitous and rapidly evolving nature of AI and its uses; we believe that it is more useful to look at risks from the perspective of vulnerable communities and the commons (see paras. 26–28 below).

21 A snapshot of current expert risk perceptions is illustrated by the results of a horizon-scanning exercise commissioned for our work (AI Risk Global Pulse Check; see annex E), a poll which sourced perceptions on AI-related trends and risks from 348 AI experts across disciplines and 68 countries in all regions.7 Overall, 7 in 10 experts polled were concerned or very concerned that harms (existing or new) resulting from AI will become substantially more serious and/or widespread in the next 18 months (see annex E).

6 “An analysis of the location of grant recipients’ headquarters from a database of US-majority foundations reveals that from 2018 to 2023, only 10 percent
of grants allocated toward AI initiatives that address one or more of the SDGs went to organizations based in low- or middle-income countries … Analysis of
private capital shows that 36 percent of 9,000 companies addressing SDGs are headquartered in the United States, but these companies received 54 percent of
total funding. We also found that while 20 percent of 9,000 companies addressing SDGs are headquartered in lower- or middle-income countries, they received
a higher proportion (25 percent) of total funding. One reason for this is that Chinese companies receive a high proportion of investment … The remaining
developing countries in the sample received only 3 percent of funding while representing 7 percent of the sample” (Medha Bankhwal and others,
“AI for social good: improving lives and protecting the planet”, McKinsey & Company, May 2024).
7 The invitee list was constructed from the Office of the Secretary-General’s Envoy on Technology (OSET) and the Advisory Body’s networks, including
participants in deep dives. Additional experts were regularly invited during the fielding period to improve representation. The final n=348 represents a strong,
balanced global sample of respondents with relevant expertise to provide an informed opinion on AI risks (see annex E for the methodology).



Figure 2: Experts' levels of concern about AI risks across multiple domains

"Please rate your current level of concern that (existing or new) harms resulting from AI will become substantially more serious and/or widespread in the next 18 months for each area." (n = 348)

Scale: 1 Not concerned; 2 Slightly concerned; 3 Somewhat concerned; 4 Concerned; 5 Very concerned. Percentages of respondents selecting each level (1 / 2 / 3 / 4 / 5):

j. Damage to information integrity (e.g. mis/disinformation, impersonation): 2 / 4 / 15 / 27 / 51
b. Intentional use of AI in armed conflict by state actors (e.g. autonomous weapons): 1 / 6 / 18 / 29 / 46
h. Inequalities arising from differential control and ownership over AI technologies (e.g. increased concentration of wealth / power among individuals, corporations and other institutions): 2 / 7 / 17 / 26 / 48
a. Intentional malicious use of AI by non-state actors (e.g. crime, terrorism): 2 / 6 / 20 / 30 / 42
l. Discrimination / disenfranchisement, particularly against marginalized communities (e.g. use of biased AIs in hiring or criminal justice decisions): 3 / 12 / 18 / 29 / 38
c. Intentional use of AI by state actors that harms individuals (e.g. mass surveillance): 2 / 11 / 23 / 32 / 33
m. Human rights violations: 3 / 13 / 23 / 24 / 37
k. Inaccurate information / analysis provided by AI in critical fields (e.g. misdiagnoses by medical AI): 3 / 12 / 27 / 26 / 32
d. Intentional use of AI by corporate actors that harms customers / users (e.g. hyper-targeted advertising, AI-driven addictive products): 4 / 13 / 23 / 32 / 29
i. Violation of intellectual property rights (e.g. profiting from protected intellectual assets without compensating the rights holder): 6 / 14 / 26 / 27 / 27
n. Environmental harms (e.g. accelerating energy consumption and carbon emissions): 8 / 12 / 25 / 29 / 26
g. Harms to labour from adoption of AI (e.g. disruption of labour markets, increased unemployment): 7 / 15 / 26 / 30 / 22
e. Unintended autonomous actions by AI systems [excl. autonomous weapons] (e.g. loss of human control over autonomous agents, deceptive / manipulative agentic actions): 14 / 18 / 26 / 26 / 16
f. Unintended multi-agent interactions among AI systems (e.g. flash economic crashes, trading AIs engaging in collusive signaling): 13 / 22 / 28 / 27 / 11

Note: Excludes "Don't know" / "No opinion" and blank responses.
Source: OSET AI Risk Pulse Check, 13-25 May 2024.

22. From a list of example AI-related risk areas,8 a plurality of experts were concerned or very concerned about harms related to:
a. Societal implications of AI: 78 per cent regarding damage to information integrity [question j], 74 per cent regarding inequalities such as concentration of wealth and power in a few hands [question h] and 67 per cent regarding discrimination / disenfranchisement, particularly among marginalized communities [question l];
b. Intentional use of AI that harms others: 75 per cent regarding use in armed conflict by State actors [question b], 72 per cent regarding malicious use by non-State actors [question a] and 65 per cent regarding use by State actors that harms individuals [question c].

23. In all but two example risk areas, most AI experts polled were concerned or very concerned about harms materializing. Although fewer than half of experts expressed such concern regarding unintended harms from AI [questions e and f], 1 in 6 of those who were very concerned about unintended AI harms mentioned that they expected agentic systems to have some of the most surprising or significant impacts on AI-related risks by 2025.9

24. Expert perceptions varied, including by region and gender (see annex E for more detailed results). This highlighted the importance of inclusive representation in exercises concerning definition of shared risks. Despite the variation, the results did reveal concerns about AI harms over the coming year, highlighting a sense of urgency among experts to address risks across multiple areas and vulnerabilities in the near future.

8 Built on the vulnerability-based risk categorization in box 4, an earlier version of which was in our interim report.
9 Question: “What emerging trends today do you think could have the most surprising and/or significant impact on AI-related risks over the next 18 months?”

25. Moreover, autonomous weapons in armed conflict, crime or terrorism, and public-security use of AI in particular, raise serious legal, security and humanitarian questions (see box 3).10

26. Risk management requires going beyond listing or prioritizing risks, however. Framing risks based on vulnerabilities can shift the focus of policy agendas from the "what" of each risk (e.g. "risk to safety") to "who" is at risk and "where", as well as who should be accountable in each case.

Box 3: AI and national and international security


Many AI technologies are not simply dual-use but inherently “re-purposable”. AI applications for law enforcement
and border controls are growing and raise concerns about due process, surveillance and lack of accountability
regarding States’ commitments to human rights norms, enshrined in the Universal Declaration of Human Rights
and other instruments.

Among the challenges of AI use in the military domain are new arms races, the lowering of the threshold of
conflict, the blurring of lines between war and peace, proliferation to non-State actors and derogation from long-
established principles of international humanitarian law, such as military necessity, distinction, proportionality and
limitation of unnecessary suffering. On legal and moral grounds, kill decisions should not be automated through
AI. States should commit to refraining from deploying and using military applications of AI in armed conflict in
ways that are not in full compliance with international law, including international humanitarian law and human
rights law.

Presently, 120 Member States support a new treaty on autonomous weapons, and both the Secretary-General
and the President of the International Committee of the Red Cross have called for such treaty negotiations to be
completed by 2026. The Advisory Body urges Member States to follow up on this call.

The Advisory Body considers it essential to identify clear red lines delineating unlawful use cases, including
relying on AI to select and engage targets autonomously. Building on existing commitments on weapons reviews
in international humanitarian law, States should require weapons manufacturers through contractual obligations
and other means to conduct legal and technical reviews to prevent unethical design and development of military
applications of AI. States should also develop legal and technical reviews of the use of AI, as well as of weapons
and means of warfare, and share related best practices.

Furthermore, States should develop common understandings relating to testing, evaluation, verification and
validation mechanisms for AI in the security and military domain. They should cooperate to build capacity
and share knowledge by exchanging good practices and promoting responsible life cycle management of AI
applications in the security and military domain. To prevent acquisition of powerful and potentially autonomous
AI systems by dangerous non-State actors, such as criminal or terrorist groups, States should set up appropriate
controls and processes throughout the life cycle of AI systems, including managing end-of-life cycle processes
(i.e. decommissioning) of military AI applications.

For transparency, “advisory boards” could be set up to provide independent expert advice and scrutiny across the
full life cycle of security and military applications of AI. Industry and other actors should consider mechanisms to
prevent the misuse of AI technology for malicious or unintended military purposes.

10 This list is intended to be illustrative only, touching on only a few of the risks facing individuals and societies.



27. This is significant, as evolving risks manifest differently for different people and societies. A vulnerability-based approach, also proposed in our interim report, offers an open-ended framework for focusing on those who could be harmed by AI, which can be a foundation for dynamic risk management (see box 4).

Box 4: Categorizing AI-related risks based on existing or potential vulnerability

Individuals
• Human dignity, value or agency (e.g. manipulation, deception, nudging, sentencing, exploitation,
discrimination, equal treatment, prosecution, surveillance, loss of human autonomy and AI-assisted
targeting).
• Physical and mental integrity, health, safety and security (e.g. nudging, loneliness and isolation,
neurotechnology, lethal autonomous weapons, autonomous cars, medical diagnostics, access to health
care, and interaction with chemical, biological, radiological and nuclear systems).
• Life opportunities (e.g. education, jobs and housing).
• (Other) human rights and civil liberties, such as the rights to presumption of innocence (e.g. predictive
policing), the right to a fair trial (e.g. culpability and recidivism prediction, and
autonomous trials), freedom of expression and information (e.g. nudging, personalized information, info
bubbles), privacy (e.g. facial recognition technology), and freedom of assembly and movement (e.g.
tracking technology in public spaces).

Politics and society


• Discrimination and unfair treatment of groups, including based on individual or group traits, such as
gender, group isolation and marginalization.
• Differential impact on children, older persons, persons with disabilities and vulnerable groups.
• International and national security (e.g. autonomous weapons, policing and border control vis-à-vis
migrants and refugees, organized crime, terrorism and conflict proliferation and escalation).
• Democracy (e.g. elections and trust).
• Information integrity (e.g. misinformation or disinformation, deepfakes and personalized news).
• Rule of law (e.g. functioning of and trust in institutions, law enforcement and the judiciary).
• Cultural diversity and shifts in human relationships (e.g. homogeneity and fake friends).
• Social cohesion (e.g. filter bubbles, declining trust in institutions, and information sources).
• Values and norms (e.g. ethical, moral, cultural and legal).

Economy
• Power concentration.
• Technological dependency.
• Unequal economic opportunities, market access, resource distribution and allocation.
• Underuse of AI.
• Overuse of AI or “technosolutionism”.
• Stability of financial systems, critical infrastructure and institutions.
• Intellectual property protection.

Environment
• Excessive consumption of energy, water and material resources (including rare minerals and other natural
resources).

28. The policy-relevance of taking a vulnerability-based lens to AI-related risks is illustrated by examining governance considerations from the perspective of a particular vulnerable group, such as children (see box 5).

29. The individuals, groups or entities of concern identified via a vulnerability-based framing of AI risks – and implied policy agendas – can themselves vary. The AI Risk Global Pulse Check also asked AI experts which individuals, groups, societies/economies/(eco)systems they were particularly concerned would be harmed by AI in the next 18 months. Marginalized communities and the global South, along with children, women, youths, creatives and those with jobs susceptible to automation, were particularly highlighted (see fig. 3).

Box 5: Focusing on children in AI governance


Ensuring that businesses and schools address the needs and rights of children requires a comprehensive
governance approach that focuses on their unique circumstances. Children generate one third of the data and
will grow up in an AI-infused economy and a world accustomed to the use of AI. This box summarizes some of the
measures relating to this topic discussed during our deep dives.

Prioritizing children’s rights and voices:


AI governance must recognize children as priority stakeholders, emphasizing their right to develop free from the
addictive effects of technology and their right to disengage from it. Unlike general human-centric approaches,
child-centric governance must consider the long-term impacts on children’s perspectives, self-image, and life
choices and opportunities. Including children in design and governance processes is crucial to ensuring that AI
systems are safe and appropriate for their use.

Research and policy development:


We need extensive research to understand how AI affects children’s social, cognitive and emotional development
over time. This research should inform policy discussions and guide protective measures across countries.

Protection and privacy:


Children should not be used as subjects for AI experimentation. Protecting children’s privacy is paramount. AI
technologies must incorporate stringent data protection protocols and provide age-appropriate content.

Child impact assessments and child appropriate design:


Mandating child impact assessments for AI systems is essential to ensuring their suitability and safety. AI
systems should be designed with children’s needs in mind, incorporating safety and restriction features from the
start. Design choices should involve input from children themselves.

Digital inclusion and equity:


Access to AI should empower children with agency, choices and voice, emphasizing holistic approaches to digital
inclusion. This includes providing AI content in multiple languages and ensuring that it is culturally appropriate for
non-English-speaking children.

International cooperation and standards:


Global interoperability of rules for children’s engagement with AI technologies is needed to protect children across
different educational and developmental environments. Global standards will be essential to address cross-border
data flows and ethical AI use for children.



Figure 3: Concerns on vulnerability highlighted in the AI Risk Global Pulse Check (INDICATIVE)

"Are there specific individuals, groups or societies/economies/(eco)systems that you are particularly concerned may be harmed by AI over the next 18 months?" [free text response] (n = 188 meaningful responses to this question)

[Word cloud of tagged keywords.] Most prominent: global South; marginalized communities; children; women; youth; creatives; workers; jobs susceptible to automation. Also mentioned: Africans; Sub-Saharan Africans; Latin Americans; Indigenous; LGBT+; minorities; migrants; elderly; persons with disabilities; rural communities; people in armed conflict; people in democratic States; people in autocratic States; less educated; less educated in AI; low-skilled workers; low-income; informal workforce; early career workers; coders; teachers; journalists; activists; students; health sector; small businesses; public institutions; intellectual property holders; ecosystems; people who treat AI as a companion; everyone.

Note: Keywords tagged for each response by OSET. Showing only keywords identified in 2+ responses. Font size is proportional to number of responses mentioned. For scale, "global South" was identified by 46 of 188 respondents who provided meaningful responses to this question; "marginalized communities" by 43 of 188.
Source: OSET AI Risk Pulse Check, 13-25 May 2024.

30. These results illustrate the importance of inclusive representation when reaching common understandings of AI risks and common ground on policy agendas, as per recommendations 1 and 2. Without such representation, AI governance policy agendas could be framed in ways that miss the concerns of portions of humanity, who will nonetheless be affected.

F. Challenges to be addressed

31. Besides near-future risks and harms, the evolution of AI development, deployment and uses also poses challenges in the context of prevailing institutions, which in turn affects strategies for AI governance. The technological pace around advanced AI – and its general-purpose nature – further tests humanity's ability to respond in time.

32. The race to develop and deploy AI systems defies traditional regulatory systems and governance regimes. Most experts polled for the AI Risk Global Pulse Check expected AI acceleration over the next 18 months, both in its development (74 per cent) and adoption and application (89 per cent) (see fig. 4).

33. As mentioned in paragraph 23, some experts expect the deployment of agentic systems in 2025. Moreover, leading technical experts acknowledge that many AI models remain opaque, with their outputs not fully predictable or controllable, even as negative spillovers downstream may impact others globally.

34. Increasing reliance on automated decision-making and content-creation by opaque algorithms can undermine fair treatment and safety. While humans often remain legally accountable for decisions to automate processes that impact others, accountability mechanisms may not evolve quickly enough for such accountability to be given prompt and meaningful effect.

Figure 4: Experts' expectations regarding AI technological development

74% expect the pace of technical change to accelerate (30% substantially); 89% expect the pace of adoption and application to accelerate (34% substantially).

"In the next 18 months, compared to the last 3 months, do you expect the pace of technical change in AI (e.g. development / release of new models) to..." (n = 348): Substantially accelerate 30%; Accelerate 44%; Remain same 21%; Decelerate 5%; Substantially decelerate 0%.

"In the next 18 months, compared to the last 3 months, do you expect the pace of adoption and application of AI (e.g. new uses of AI in business / government) to..." (n = 348): Substantially accelerate 34%; Accelerate 55%; Remain same 10%; Decelerate 0%; Substantially decelerate 0%. No respondents expected deceleration in adoption and application.

Note: Numbers may not add up to 100% owing to rounding. Excludes "Don't know" / "No opinion" and blank responses.
Source: OSET AI Risk Pulse Check, 13-25 May 2024.

35. A societal risk thus emerges that ever-fewer individuals end up being held accountable for harms arising from their decisions to automate processes using AI, even as increasingly powerful systems enter the world. This demands agile governance to ensure that accountability mechanisms keep pace with accelerating AI.

36. If the pace of AI development and deployment challenges existing institutions, so does the breadth. A general-purpose technology with global reach, advanced AI can be deployed across domains affecting societies in manifold ways, with broad policy implications.

37. The implications and potential impact of AI's intersection with multiple areas, including finance, labour markets, education and political systems, presage broad consequences that demand a whole-of-society approach (see examples in box 6). Existing institutions must mount holistic, cross-sectoral responses that address AI's wide-ranging societal impacts.

38. The pace, breadth and uncertainty of AI's development, deployment and use highlight the value of a holistic, transversal and agile approach to AI. Internationally, a holistic perspective needs to be mirrored in a networked institutional approach to AI governance across sectors and borders, which engages stakeholders without being captured by them.

39. On climate change, the world has come to realize only belatedly that a holistic approach to global collective action is needed. With AI, there is an opportunity to do so by design.

40. The above challenges are compounded by an associated concentration of wealth and decision-making among a handful of private AI developers and deployers, particularly multinational corporations. This raises another question of how stakeholders can be engaged in AI's governance without undermining the public interest.



Box 6: AI-related societal impacts
As part of its broader engagement, Advisory Body members consulted with a range of stakeholders to discuss the
implications of AI on society. This box summarizes key concerns and potential initiatives brought forward as part
of deep dives on this topic.

Social, psychological and community impact:

As AI becomes more powerful and widespread, its development, deployment and application will become more
personalized, with the potential to foster alienation and addiction. To some Advisory Body members, AI trained on
an individual’s data, and its consequent role as a primary interlocutor and intermediary, may reflect an inflection
point for human beings – with the potential to create urgent new societal challenges, while exacerbating existing
ones.

For example, future AI systems may be able to generate an endless feed of high-quality video content tailored
to individuals’ personal preferences. Increased social isolation, alienation, mental health issues, loss of human
agency and impacts on emotional intelligence and social development are only a few of the potential outcomes.

These issues are already insufficiently explored by policymakers in the context of technologies such as smart
devices and the Internet; they are almost completely unexplored in the context of AI, with current governance
frameworks prioritizing risks to individuals, rather than society as a whole.

As policymakers consider future responses to AI, they must weigh these factors as well, and develop policies
that promote societal well-being, particularly for youth. Government interventions could include fostering
environments that prioritize face-to-face interactions between humans, making mental health support more readily
available, and investing more in sports facilities, public libraries and the arts.

Nevertheless, prevention is better than cure: industry developers should design their products without addictive
personalized features, ensure that the products do not damage mental health and promote (rather than
undermine) a sense of shared belonging in society. Tech companies should establish policies to manage societal
risks on an equal basis to other risks as part of efforts to identify and mitigate risks across the entire life cycle of
AI products.

Disinformation and trust:

Deepfakes, voice clones and automated disinformation campaigns pose a specific and serious threat to
democratic institutions and processes such as elections, and to democratic societies and social trust more
generally, including through foreign information manipulation and interference (FIMI). The development of closed
loop information ecosystems, reinforced by AI and leveraging personal data, can have profound effects on
societies, potentially making them more accepting of intolerance and violence towards others.

Protecting the integrity of representative government institutions and processes requires robust verification and
deepfake detection systems, alongside rapid notice and take-down procedures for content that is likely to deceive
in a way that causes harm or societal divisions, or which promotes war propaganda, conflict and hate speech.
Individuals who are not public figures should have protections from others creating deepfakes in their likeness for
fraudulent, defamatory or otherwise abusive purposes. Sexualized deepfakes are a particular concern for women
and girls and may be a form of gender-based violence.

Voluntary commitments from private sector players – such as labelling deepfakes or enabling users to flag and
then take down deepfakes made or distributed with malicious intent – are important first steps. However, they do
not sufficiently mitigate societal risks. Instead, a global, multi-stakeholder approach is required, alongside binding
commitments. Common standards for content authentication and digital provenance would allow for a globally
recognized approach to identify synthetic and AI-modified images, videos and audio.

Additionally, real-time knowledge-sharing between public and private actors, based on international standards,
would allow for rapid-response capabilities to immediately take down deceptive content or FIMI before it has
a chance to go viral. Nonetheless, these processes should incorporate safeguards to ensure that they are not
manipulated or abused to abet censorship.

These actions should be accompanied by preventive measures, to increase societal resilience to AI-driven
disinformation and propaganda, such as public awareness campaigns on AI’s potential to undermine information
integrity. Member States should additionally promote media and digital literacy campaigns, support fact-checking
initiatives and invest in capacity-building for the FIMI defender community.



2. The need for global governance
41. There is, today, a global governance deficit with respect to AI. Despite much discussion of ethics and principles, the patchwork of norms, institutions and initiatives is still nascent and full of gaps. Accountability and remedies for harm are often notable primarily for their absence. Compliance rests on voluntarism. There is a fundamental disconnect between high-level rhetoric, the systems being developed, deployed and used, and the conditions required for safety and inclusiveness. As we noted in our interim report, AI governance is crucial, not merely to address the challenges and risks, but also to ensure that we harness its potential in ways that leave no one behind.11

42. The imperative of global governance, in particular, is irrefutable. AI's raw materials, from critical minerals to training data, are globally sourced. General-purpose AI, deployed across borders, spawns manifold applications globally. The accelerating development of AI concentrates power and wealth on a global scale, with geopolitical and geoeconomic implications. Moreover, no one currently understands all of AI's inner workings enough to fully control its outputs or predict its evolution. Nor are decision makers held accountable for developing, deploying or using systems that they do not understand. Meanwhile, negative spillovers and downstream impacts resulting from such decisions are also likely to be global.

43. Despite AI's global reach, national and regional institutional structures and regulations end at physical borders. This reduces the ability of any single country to govern the downstream applications of AI that result in transboundary harms, or to address issues along complex cross-border supply chains of compute infrastructure, training data flows and energy sources that lie behind AI's development and use. Leading AI companies often have more direct influence over downstream applications (via upstream risk mitigation) than most countries acting alone.

44. The development, deployment and use of such a technology cannot be left to the whims of markets alone. National governments and regional organizations will be crucial. However, in addition to considerations of equity, access and prevention of and remedies for harm, the very nature of the technology itself – transboundary in structure and application – necessitates a global multisector approach. Without a globally inclusive framework that engages stakeholders, and given the competitive dynamics at play, both Governments and companies might be tempted to cut corners or to prioritize self-interest.

45. AI, therefore, presents global challenges and opportunities that require a holistic and global approach that cuts transversally across political, economic, social, ethical, human rights, technical, environmental and other domains. Such an approach can turn a patchwork of evolving initiatives into a coherent, interoperable whole, grounded in international law and adaptable across contexts and time.

46. The need for global governance of AI arises at a time of geopolitical and geoeconomic competition for influence and markets. Yet addressing AI's risks while enabling opportunities to be harnessed equitably requires concerted global action. A widening digital divide could limit the benefits of AI to a handful of States and individuals, with risks and harms impacting many, especially vulnerable, groups.

11 See https://un.org/ai-advisory-body.

A. Guiding principles and functions for international governance of AI

47. In our interim report, we outlined five principles that should guide the formation of new international AI governance institutions:
• Guiding principle 1: AI should be governed inclusively, by and for the benefit of all
• Guiding principle 2: AI must be governed in the public interest
• Guiding principle 3: AI governance should be built in step with data governance and the promotion of data commons
• Guiding principle 4: AI governance must be universal, networked and rooted in adaptive multi-stakeholder collaboration
• Guiding principle 5: AI governance should be anchored in the Charter of the United Nations, international human rights law and other agreed international commitments such as the SDGs

48. Box 7 summarizes the feedback on these principles, which emphasized the importance of human rights and the need for greater clarity on effective implementation of the guiding principles, including regarding data governance. It challenged us to address the problem of ensuring that support for inclusivity was backed by action, and that marginalized groups would be represented.

49. In our interim report, we also proposed several institutional functions that might be pursued at the international level (see fig. 5). The feedback largely confirmed the need for these functions at the global level, while calling for additional complementary functions related to data and AI governance to translate guiding principle 3 (AI governance should be built in step with data governance and the promotion of data commons) into practice.

Figure 5: AI governance functions proposed at the international level

AI governance functions, in order of increasing institutional “hardness”:
1. Horizon-scanning, building scientific consensus
2. Interoperability (horizontal) and alignment (vertical) with norms
3. Mediating standards, safety and risk management frameworks
4. Facilitation of development and use, liability regime, cross-border model training and testing
5. International collaboration on data, compute and talent to solve the SDGs
6. Reporting and peer review
7. Norm elaboration, compliance and accountability


Box 7: Feedback on the guiding principles

Emphasis on human rights-based AI governance:

Based on the extensive consultations conducted by the High-level Advisory Body following the publication of its interim
report, guiding principle 5 (AI governance should be anchored in the Charter of the United Nations, international human
rights law and other agreed international commitments) garnered the strongest support across all sectors of stakeholders,
including governments, civil society, the technical community, academia and the private sector. This included respecting,
promoting and fulfilling human rights and prosecuting their violations, as well as General Assembly resolution 78/265 on
seizing the opportunities of safe, secure and trustworthy AI systems for sustainable development, unanimously adopted in
March 2024.

The Advisory Body in its deliberations was convinced that to mitigate the risks and harms of AI, to deal with novel use
cases and to ensure that AI can truly benefit all of humanity and leave no one behind, human rights must be at the centre of
AI governance, ensuring rights-based accountability across jurisdictions. This foundational commitment to human rights is
cross-cutting and applies to all the recommendations made in this final report.

Specific implementation mechanisms and clarity on guidelines:

Many stakeholders emphasized the need for detailed action plans and clear guidelines to ensure effective implementation
of the Advisory Body’s guiding principles for international AI governance. Governmental entities suggested developing
clear recommendations for defining and ensuring the public interest, along with mechanisms for public participation and
oversight. The need for clear policies and leveraging existing regulatory frameworks to maintain competitive and innovative
AI markets was often stressed by private sector entities. Many international organizations and civil society organizations
also called for agile governance systems designed to respond in a timely manner to evolving technologies. Some
specifically requested a new entity with “muscle and teeth”, beyond mere coordination.

Mechanisms to hold key actors responsible:

A common concern was accountability for discriminatory, biased and otherwise harmful AI, with suggestions for
mechanisms to ensure accountability and remedies for harm and address the concentration of technological capacity and
market power. Many organizations highlighted the necessity of addressing unchecked power and ensuring consumer rights
and fair competition. Academic institutions recognized the strengths of the guiding principles in their universality and
inclusivity, but suggested improvements in stakeholder engagement. Private sector actors emphasized responsible use of
AI, along with breaking down barriers to access.

More specific functions on AI data governance:

The absence of data governance systems was mentioned in multiple consultations, with stakeholders indicating that
the United Nations was a natural venue for dialogue on data governance. Governments emphasized the need for robust
data governance frameworks that prioritized privacy, data protection and equitable data use, advocating for international
guidelines to manage data complexities in AI development. Stakeholders requested that these frameworks be developed through a transparent and inclusive process, integrating ethical considerations such as consent and privacy.

Academia highlighted that data governance should be dealt with as a priority in the short term. Private sector entities
noted that data governance measures should complement AI governance, emphasizing comprehensive privacy laws and
responsible AI use. International organizations and civil society organizations stressed that governance of AI training data
should protect consumer rights and support fair competition among AI developers via non-exclusive access to AI training
data, underscoring the call for specific and actionable data governance measures. The United Nations was identified as a
key venue for addressing these governance challenges and bridging resource disparities.

Figure 6: Interregional and regional AI governance initiatives, key milestones,
2019–2024 (H1)

50 Regarding the institutionally “harder” AI governance functions of monitoring, verification, reporting, compliance, accountability, stabilization, response and enforcement, the feedback noted that, first, international treaty obligations would be needed prior to the institutionalization of such functions, and that the case for institutionalizing such functions in governing AI as a technology was not yet made.

51 Not all functions need to be performed exclusively by the United Nations. However, if the patchwork of norms and institutions is to be transformed into a safety net that promotes and supports sustainable innovation benefiting all of humanity, then there needs to be a shared understanding of the science and common ground behind the rules and the standards by which we assess whether governance is achieving its objectives.

52 During our consultations, we heard calls for a more detailed landscape analysis of existing and emerging efforts to govern AI internationally, and of gaps needing to be filled for the equitable, effective and efficient international governance of AI.

B. Emerging international AI governance landscape

53 There is, to be sure, no shortage of documents and dialogues presently focused on AI governance. Hundreds of guides, frameworks and principles have been adopted by governments, companies and consortiums, and by regional and international organizations. Dozens of forums convene diverse actors, from established intergovernmental processes and expert bodies to ad hoc multi-stakeholder initiatives. These are accompanied by existing and emerging regulation at the national and regional levels.

54 International initiatives by Governments are proliferating (see fig. 6). These emerging initiatives increasingly follow a transversal approach to AI governance at the international level, consisting of principles, declarations, statements and other issuances that address AI holistically, rather than in specific domains. They have accelerated sharply



Figure 7: Sources of governance initiatives that focused on AI specifically

[Figure: a non-exhaustive map of AI-focused governance initiatives, arranged by geographical range (domestic; regional, within regions; interregional, between regions) and by inclusiveness (bi-/minilateral; plurilateral; large-n multilateral and universal; industry standards), with parties/adopters ranging from governments (initiatives and agreements) to companies (industry standards). Examples include interregional bi-/minilateral initiatives (United States-United Kingdom, New Zealand-United Kingdom, United States-Singapore, United States-EU); interregional plurilateral initiatives (AI summits, CoE, G7, G20, GPAI, OECD); United Nations processes at the universal level; industry standards bodies (FMF, IEC, IEEE, ISO, ITU, WSC); regional initiatives (ASEAN, AU, EU, OAS) and regional standards bodies (CEN-CENELEC, ETSI); and domestic efforts (AI safety institutes, BSI, SAC, ANSI, NIST and more than 170 others).]
Abbreviations: ANSI, American National Standards Institute; ASEAN, Association of Southeast Asian Nations; AU, African Union; BSI, British Standards Institution; CEN, European
Committee for Standardisation; CENELEC, European Committee for Electrotechnical Standardization; CoE, Council of Europe; ETSI, European Telecommunications Standards
Institute; EU, European Union; FMF, Frontier Model Forum; G20, Group of 20; G7, Group of Seven; GPAI, Global Partnership on Artificial Intelligence; IEC, International
Electrotechnical Commission; IEEE, Institute of Electrical and Electronics Engineers; ISO, International Organization for Standardization; ITU, International Telecommunication
Union; NIST, National Institute of Standards and Technology; OAS, Organization of American States; OECD, Organisation for Economic Co-operation and Development; SAC,
Standardization Administration of China; WSC, World Standards Cooperation.

since 2023, spurred by releases of multiple general-purpose AI large language models following the release of ChatGPT in November 2022.

55 In parallel, industry standards on AI have been developed and published for adoption internationally. Other multi-stakeholder initiatives have also sought to bridge the divide between the public and private sectors, including in discussion arenas such as the Internet Governance Forum.

56 A survey of some of the sources of AI governance initiatives and industry standards, mapped by geographical range and inclusiveness, is provided in figure 7 (in listing this recent work, we acknowledge many years of efforts by academics, civil society and professional bodies).

57 Examples of relevant regional and interregional plurilateral initiatives include those led by the African Union, various hosts of AI summits, the Association of Southeast Asian Nations, the Council of Europe, the European Union, the Group of Seven (G7), the Group of 20 (G20), the Global Partnership on Artificial Intelligence, the Organization of American States and the Organisation for Economic Co-operation and Development (OECD), among others.

58 Our analysis of current governance arrangements is likely to be outdated within months. Nevertheless, it can help to illustrate how current and emerging international AI governance initiatives relate to our guiding principles for the formation of new global governance institutions for AI, including principle 1 (AI should be governed inclusively, by and for the benefit of all).

3. Global AI governance gaps
59 The multiple national, regional, multi-stakeholder and other initiatives mentioned above have yielded meaningful gains and informed our work; many of their representatives have contributed to our deliberations in writing or participated in our consultations.

60 Nonetheless, beyond a couple of initiatives emerging from the United Nations,12 none of the initiatives can be truly global in reach. These representation gaps in AI governance at the international level are a problem, because the technology is global and will be comprehensive in its impact.

61 Separate coordination gaps between initiatives and institutions risk splitting the world into disconnected and incompatible AI governance regimes.

62 Furthermore, implementation and accountability gaps reduce the ability of States, the private sector, civil society, academia and the technical community to translate commitments, however representative, into tangible outcomes.

A. Representation gaps

63 Our analysis of the various non-United Nations AI governance initiatives that span regions shows that most initiatives are not fully representative in their intergovernmental dimensions.

64 Many exclude entire parts of the world. As figure 8 shows, looking at seven non-United Nations plurilateral, interregional AI initiatives with overlapping membership, seven countries are parties to all of them, whereas fully 118 countries are parties to none (primarily in the global South, with uneven representation even of leading AI nations).

65 Selectivity is understandable at an early stage of governance when there is a degree of experimentation, competition around norms and diverse levels of comfort with new technologies. However, as international AI governance matures, global representation becomes more important in terms of equity and effectiveness.

66 Besides the non-inclusiveness of existing efforts, representation gaps also exist in national and regional initiatives focused on reaching common scientific understandings of AI. These representation gaps may manifest in decision-making processes regarding how assessments are scoped, resourced and conducted.

67 Equity demands that more voices play meaningful roles in decisions about how to govern technology that affects all of us, as well as recognizing that many communities have historically been excluded from those conversations. The relative paucity of topics from the agendas of major initiatives that are priorities of certain regions signals an imbalance stemming from underrepresentation.13

68 AI governance regimes must span the globe to be effective – effective in building trust, averting “AI arms races” or “races to the bottom” on safety and rights, responding effectively to challenges arising

12 The United Nations Educational, Scientific and Cultural Organization (UNESCO) Recommendation on the Ethics of Artificial Intelligence (2021), and two General
Assembly resolutions on AI.
13 For example, governance of AI training data sets, access to computational power, AI capacity development, AI-related risks regarding discrimination of
marginalized groups and use of AI in armed conflict (see annex E for results of the AI Risk Global Pulse Check, which shows different perceptions of risks by
respondents from the Western European and Others Group versus others). Many States and marginalized communities have also been excluded from the
benefits of AI or may disproportionately suffer its harms. Equity demands a diverse and inclusive approach that accounts for the views of all regions and that
spreads opportunities evenly while mitigating risks.



Figure 8: Representation in seven non-United Nations international AI governance initiatives

Sample (interregional only, excludes regional): OECD AI Principles (2019), G20 AI principles (2019), Council of Europe AI Convention drafting group (2022–2024), GPAI Ministerial Declaration (2022), G7 Ministers’ Statement (2023), Bletchley Declaration (2023) and Seoul Ministerial Declaration (2024).

Parties* to the sampled initiatives/instruments:
7 of 7: 7 countries (Canada, France, Germany, Italy, Japan, United Kingdom and United States)
6 of 7: 2 countries
5 of 7: 5 countries
4 of 7: 7 countries
3 of 7: 10 countries
2 of 7: 23 countries
1 of 7: 21 countries
0 of 7: 118 countries

Countries not involved, by regional grouping: WEOG, 0 of 29 countries; EEG, 1 of 23 countries; LAC, 25 of 33 countries; APG, 44 of 54 countries; AG, 48 of 54 countries.

* Per endorsement of relevant intergovernmental issuances. Countries are not considered involved in a plurilateral initiative solely because of membership in the European Union or
the African Union. Abbreviations: AG, African Group; APG, Asia and the Pacific Group; EEG, Eastern European Group; G20, Group of 20; G7, Group of Seven; GPAI, Global Partnership
on Artificial Intelligence; LAC, Latin America and the Caribbean; OECD, Organisation for Economic Co-operation and Development; WEOG, Western European and Others Group.

from the transboundary character of AI, spurring learning, encouraging interoperability and sharing AI benefits.14 There are, moreover, benefits to including diverse views, including un-likeminded views, to anticipate threats and calibrate responses that are creative and adaptable.

69 By limiting the range of countries included in key agenda-shaping, relationship-building and information-sharing processes, selective plurilateralism can limit the achievement of its own goals. These include compatibility of emerging AI governance approaches, global AI safety and shared understandings regarding the science of AI at the global level (see recommendations 1, 2 and 3 on what makes a global approach particularly effective here).

70 The two General Assembly resolutions on AI adopted in 2024 so far15 signal acknowledgement among leading AI nations that representation gaps need to be addressed regarding international AI governance, and the United Nations could be the forum to bring the world together in this regard.

71 The Global Digital Compact in September 2024 and the World Summit on the Information Society Forum in 2025 offer two additional policy windows where a globally representative set of AI governance processes could be institutionalized to address representation gaps.16

14 If and when red lines are established – analogous perhaps to the ban on human cloning – they will only be enforceable if there is global buy-in to the norm, as
well as monitoring compliance. This remains the case despite the fact that, paradoxically, in the current paradigm, while the costs of a given AI system go down,
the costs of advanced AI systems (arguably the most important to control) go up.
15 Resolutions 78/265 (seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development) and 78/311
(enhancing international cooperation on capacity-building of artificial intelligence).
16 Various plurilateral initiatives, including the OECD AI Principles, the G7 Hiroshima AI Process and the Council of Europe Framework Convention on Artificial
Intelligence, are open to supporters or adherents beyond original initiating countries. Such openness might not, however, deliver representation and legitimacy
at the speed and breadth required to keep pace with accelerating AI proliferation globally. Meanwhile, representation gaps in international AI governance
processes persist, with decision-making concentrated in the hands of a few countries and companies.

B. Coordination gaps

72 The ongoing emergence and evolution of AI governance initiatives are not guaranteed to work together effectively for humanity. Instead, coordination gaps have appeared. Effective handshaking between the selective plurilateral initiatives (see fig. 8) and other regional initiatives is not assured, risking incompatibility between regions.

73 Nor are there global mechanisms for all international standards development organizations (see fig. 7), international scientific research initiatives or AI capacity-building initiatives to coordinate with each other, undermining interoperability of approaches and resulting in fragmentation. The resulting coordination gaps between various sub-global initiatives are in some cases best addressed at the global level.

74 A separate set of coordination gaps arise within the United Nations system, reflected in the array of diverse United Nations documents and initiatives in relation to AI. Figure 9 shows 27 United Nations-related instruments in specific domains that may apply to AI – 23 of them are binding and will require interpretation as they pertain to AI. A further 29 domain-level documents from the United Nations and related organizations focus specifically on AI, none of which are binding.17 In some cases, these can address AI risks and harness AI benefits in specific domains.

75 The level of activity shows the importance of AI to United Nations programmes. As AI expands to affect ever-wider aspects of society, there will be growing calls for diverse parts of the United Nations system to act, including through binding norms. It also shows the ad hoc nature of the responses, which have largely developed organically in specific domains and without an overarching strategy. The resulting coordination gaps invite overlaps and hinder interoperability and impact.

76 The number and diversity of approaches are a sign that the United Nations system is responding to an emerging issue. With proper orchestration, and in combination with processes taking a holistic approach, these efforts can offer an efficient and sustainable pathway to inclusive international AI governance in specific domains. This could enable meaningful, harmonized and coordinated impacts on areas such as health, education, technical standards and ethics, instead of merely contributing to the proliferation of initiatives and institutions in this growing field. International law, including international human rights law, provides a shared normative foundation for all AI-related efforts, thereby facilitating coordination and coherence.

77 Although the work of many United Nations entities touches on AI governance, their specific mandates mean that none does so in a comprehensive manner; and their designated governmental focal points are similarly specialized.18 This limits the ability of existing United Nations entities to address

17 A survey conducted by the United Nations Chief Executives Board in February 2024 of 57 United Nations entities reported 50 documents concerning AI
governance; 44 of the 57 entities responded, including the Economic Commission for Latin America and the Caribbean; the Economic and Social Commission
for Asia and the Pacific; the Economic and Social Commission for Western Asia; the Food and Agriculture Organization of the United Nations (FAO); the
International Atomic Energy Agency (IAEA); the International Civil Aviation Organization (ICAO); the International Fund for Agricultural Development; ILO; the
International Monetary Fund; the International Organization for Migration; International Trade Centre; the International Telecommunication Union (ITU); the
United Nations Entity for Gender Equality and the Empowerment of Women (UN-WOMEN); the Joint United Nations Programme on HIV/AIDS (UNAIDS); the
United Nations Conference on Trade and Development (UNCTAD); the Department of Economic and Social Affairs; the Department of Global Communications;
the Executive Office of the Secretary-General; the Office for the Coordination of Humanitarian Affairs; the Office of the United Nations High Commissioner
for Human Rights; the Office of Counter-Terrorism; the Office for Disarmament Affairs; the Office of Information and Communications Technology; OSET; the
United Nations Development Programme (UNDP); the United Nations Office for Disaster Risk Reduction; the United Nations Environment Programme; UNESCO;
the United Nations Framework Convention on Climate Change; the United Nations Population Fund; the United Nations High Commissioner for Refugees
(UNHCR); the United Nations Children’s Fund; the United Nations Interregional Crime and Justice Research Institute; the United Nations Industrial Development
Organization; the United Nations Office on Drugs and Crime/United Nations Office at Vienna; the United Nations Office for Project Services; the United
Nations Relief and Works Agency for Palestine Refugees in the Near East; United Nations University; United Nations Volunteers; the World Trade Organization;
the Universal Postal Union; the World Bank Group; the World Food Programme; the World Health Organization (WHO); and the World Intellectual Property
Organization (WIPO). See “United Nations system white paper on AI governance: an analysis of the UN system’s institutional models, functions, and existing
international normative frameworks applicable to AI governance” (available at https://unsceb.org/united-nations-system-white-paper-ai-governance).
18 For example, ministries of education, science and culture (UNESCO); telecommunication or ICT (ITU); industry (United Nations Industrial Development
Organization); and labour (ILO).



Figure 9: Selected documents related to AI governance from the United
Nations and related organizations
[Figure: a non-exhaustive mapping of documents from the United Nations and related organizations that apply, or may apply, to AI governance, grouped by domain – ethics and policy; human rights; technical standards; communications; drugs and crime; trade; education; peace and security; intellectual property; health; and other areas such as smart cities and disaster risk reduction – issued by entities including UNESCO, WHO, UNICEF, UN-Habitat, UN-Women, the United Nations Population Fund, OHCHR, ILO, ITU, UNDP, the Department of Global Communications, the United Nations Office on Drugs and Crime, UNICRI, UNOCT, ICAO, WTO, UNCITRAL, UNODA, WIPO, the United Nations Industrial Development Organization and the United Nations Office for Disaster Risk Reduction. Entries are marked as applying or potentially applying to AI; binding instruments are asterisked in the original figure.]

Source: “United Nations system white paper on AI governance: an analysis of the UN system’s institutional models, functions, and existing international normative frameworks
applicable to AI governance”, 28 Feb 2024.

the multifaceted implications of AI globally on their own. At the national and regional levels, such gaps are being addressed by new institutions,19 such as AI safety institutes or AI offices for an appropriately transversal approach.

C. Implementation gaps

78 Representation and coordination are not enough, however. Action and follow-up processes are required to ensure that commitments to good governance translate into tangible outcomes in practice. More is needed to ensure accountability. Peer pressure and peer-to-peer learning are two elements that can spur accountability.

79 Engaging with the private sector will be equally important for meaningful accountability and remedy for harm. The United Nations has experience of this in the United Nations Guiding Principles on Business and Human Rights. Equally, we would need robust engagement of civil society and scientific experts to keep governments and private companies honest about their commitments and claims.

80 Missing enablers for harnessing AI’s benefits for the public good within and between countries constitute a key implementation gap. Many countries have put in place national strategies to boost AI-related infrastructure and talent, and a few initiatives for international assistance are emerging.20 However, these are under-networked and under-resourced.

81 At the global level, connecting national and regional capacity development initiatives, and pooling resources to support those countries left out from such efforts, can help to ensure that no country is left behind in the sharing of opportunities associated with AI. Another key implementation gap is the absence of a dedicated fund for AI capacity-building, despite the existence of some funding mechanisms for digital capacity (box 8).

19 Including those set up by Canada, Japan, Singapore, the Republic of Korea, the United Kingdom, the United States and the European Union.
20 National-level efforts could continue to employ diagnosis tools, such as the UNESCO AI Readiness Assessment Methodology to help to identify gaps at the
country level, with the international network helping to address them.

Box 8: Gaps in global financing of AI capacity
The Advisory Body believes that there are no existing global funds for AI capacity-building with the scale and
mandate to fund the significant investment required to put a floor under the AI divide.

Indicative estimates place the amount needed in the range of $350 million to $1 billion annually,a including in-
kind contributions from the private sector, mandated to target AI capacity across all AI enablers, including talent,
compute, training data, model development and interdisciplinary collaboration for applications. Examples of
existing multilateral funding mechanisms include:

a) Joint SDG Fund

This fund is broad and encompasses every SDG, as well as emergency response. It supports country-level
initiatives for integrated United Nations policy and strategic financing support to countries to advance the SDGs.
The fund helps the United Nations to deliver and catalyse SDG financing and programming. Since 2017, 30
participating United Nations entities have received a total of $223 million. It does not fund national governments,
communities or entities directly, and it does not fund cross-border initiatives.

In 2023, the fund had around 16 donors for a total of $57.7 million, and an estimated $58.8 million in 2024. The
private sector has contributed $83,155 since 2017, and none in 2023 or 2024 to date.

Most of the fund, 60 per cent, goes to actions in five SDGs: Goals 2 (zero hunger), 5 (gender equality), 7 (affordable and clean energy), 9 (industry, innovation and infrastructure) and 17 (partnerships).

The fund’s Policy Digital Transformation stream (launched in 2023) has funded one project of $250,000,
disbursed equally between the International Telecommunication Union (ITU) and the United Nations Development
Programme (UNDP). At the end of financial year 2023, its delivery rate was 2.27 per cent. Digital transformation
activities form a small part of the fund’s activities, and typically in relation to other SDGs (e.g. connectivity and
digital infrastructure to support service delivery, such as in small island developing States).

b) World Bank, Digital Development Partnership

This fund supports countries in developing and implementing the digital transformation with a focus on
broadband infrastructure, access and use, digital public infrastructure and data production, accessibility and use.
By the end of 2022, it had invested $10.7 billion in more than 80 countries.

The partnership includes an associated multi-donor trust fund on cybersecurity (supported by Estonia, Germany, Japan and the
Kingdom of the Netherlands) to support national cybersecurity capacity development.

a Less than 1 per cent of estimated annual private sector AI investment in 2023.



4. Enhancing global cooperation
82 Having outlined the global governance deficit, we now turn to recommendations to address the priority gaps for the near term.

83 Our recommendations advance a holistic vision for a globally networked, agile and flexible approach to governing AI for humanity, encompassing common understanding, common ground and common benefits to enhance representation, enable coordination and strengthen implementation (see fig. 10). Only such an inclusive and comprehensive approach to AI governance can address the multifaceted and evolving challenges and opportunities AI presents on a global scale, promoting international stability and equitable development.

84 Guided by the principles listed in our interim report (see para. 47), our proposals seek to fill gaps and bring coherence to the fast-emerging ecosystem of international AI governance responses and initiatives, helping to avoid fragmentation and missed opportunities. To support these measures efficiently and partner effectively with other institutions, we propose a light, agile structure as an expression of coherent effort: an AI office in the United Nations Secretariat, close to the Secretary-General, working as the “glue” to hold these other pieces together.

85 The United Nations is far from perfect. Nevertheless, the legitimacy arising from its unique inclusiveness, coupled with its binding normative foundations in international law, including international human rights law, presents hope for governing AI for the benefit and protection of humanity in a manner that is equitable, effective and efficient.21

Figure 10: Overview of recommendations and how they address global AI governance gaps

Purpose: enhance representation, enable coordination and strengthen implementation.

Common understanding: international scientific panel on AI.
Common ground: policy dialogue on AI governance; AI standards exchange.
Common benefits: capacity development network; global fund for AI; global AI data framework.
Coherent effort: AI office within the Secretariat – advising the Secretary-General on matters related to AI, working to promote a coherent voice within the United Nations system, engaging States and stakeholders, partnering and interfacing with other processes and institutions, and supporting other proposals as required.

21 It should also be inclusive and cohesive, and enhance global peace and security.

A. Common understanding

86 A global approach to governing AI starts with a common understanding of its capabilities, opportunities, risks and uncertainties.

87 The AI field has been evolving quickly, producing an overwhelming amount of information and making it difficult to decipher hype from reality. This can fuel confusion, forestall common understanding and advantage major AI companies at the expense of policymakers, civil society and the public.

88 In addition, a dearth of international scientific collaboration and information exchange can breed global misperceptions and undermine international trust.

89 There is a need for timely, impartial and reliable scientific knowledge and information about AI for Member States to build a shared foundational understanding worldwide, and to balance information asymmetries between companies housing expensive AI labs and the rest of the world, including via information-sharing between AI companies and the broader AI community.

90 This is most efficient at the global level, enabling joint investment in a global public good and public interest collaboration across otherwise fragmented and duplicative efforts.

International scientific panel on AI

Recommendation 1: An international scientific panel on AI

We recommend the creation of an independent international scientific panel on AI, made up of diverse multidisciplinary experts in the field serving in their personal capacity on a voluntary basis. Supported by the proposed United Nations AI office and other relevant United Nations agencies, partnering with other relevant international organizations, its mandate would include:
a. Issuing an annual report surveying AI-related capabilities, opportunities, risks and uncertainties, identifying areas of scientific consensus on technology trends and areas where additional research is needed;
b. Producing quarterly thematic research digests on areas in which AI could help to achieve the SDGs, focusing on areas of public interest which may be under-served; and
c. Issuing ad hoc reports on emerging issues, in particular the emergence of new risks or significant gaps in the governance landscape.

91 There is precedent for such an institution. Some examples include the United Nations Scientific Committee on the Effects of Atomic Radiation, the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES), the Scientific Committee on Antarctic Research, and the Intergovernmental Panel on Climate Change (IPCC).

92 These models are known for their systematic approaches to complex, pervasive issues affecting various sectors and global populations. However, while they can provide inspiration, none is perfectly suited to assessing AI technology and should not be replicated directly. Instead, a tailored approach is required.

93 Learning from such precedents, an independent, international and multidisciplinary scientific panel on AI could collate and catalyse leading-edge research to inform those seeking scientific perspectives on AI technology or its applications from an impartial, credible source. An example of one kind of issue to which the panel could contribute is the ongoing debate over open versus closed AI systems, discussed in box 9.

94 A scientific panel under the auspices of the United Nations would have a broad focus to cover an inclusive range of priorities holistically. This could include sourcing expertise on AI-related opportunities, and facilitating “deep dives” into applied domains of the SDGs, such as health care, energy, education, finance, agriculture, climate, trade and employment.



95 Risk assessments could also draw on the work of other AI research initiatives, with the United Nations offering a uniquely trusted “safe harbour” for researchers to exchange ideas on the “state of the art”. International law, including human rights law, would provide a compass for defining pertinent risks. By pooling knowledge across silos in countries or companies that may not otherwise engage or be included, a United Nations-hosted panel can help to rectify misperceptions and bolster trust globally.

96 Such a scientific panel would not necessarily conduct its own research but be a catalyst for networked action. It could aggregate, distil and translate developments in AI for its audiences, highlighting potential use cases. It would reduce information asymmetry, help to avoid misdirected investments and keep information flowing across a global network of experts.22

97 The panel would have three key audiences:
a. The first is the global scientific community.23 The shift of fundamental research on AI to private corporations, driven in part by the cost of computational power, has led to concerns that such research may be unduly driven by financial interests. A scientific panel could encourage greater research in public institutions worldwide focused on the public good.
b. Secondly, regular independent assessments would inform Member States, policymakers and other processes recommended in this report. An annual risk survey from the world’s experts would help to shape the agenda of the AI governance dialogues proposed in recommendation 2. The state-of-the-art report would inform the development of standards proposed in recommendation 3, as well as the capacity development network proposed in recommendation 4.
c. Thirdly, through its public reports, it could serve as an impartial source of high-quality information for the public.

98 The global reach of networks uniquely accessible via the United Nations would enable common understanding across the widest basis, making available findings in ways relevant to various socioeconomic and geographical contexts. The panel can thereby activate the United Nations as a reliable platform for inclusively networked, multidisciplinary stakeholder understanding.

99 The panel could be established for an initial period of 3–5 years (with extension subject to review by the Secretary-General), and could function according to the following basis:
a. The panel could start with 30–50 members appointed through a mix of Member State- and self-nomination, comparable to how the Advisory Body was established. It should focus on scientific expertise across disciplines, and would need to ensure diverse representation by region and gender, as well as reflecting the interdisciplinary nature of AI. Membership could be rotated periodically within the overall mandate of 3–5 years.
b. The panel would meet virtually (and in-person as a plenary, perhaps twice a year). Meetings could rotate between cities hosting relevant United Nations entities, including in global South locations. It should be encouraged to form thematic working groups, adding additional members as needed and engaging networks of academic partners. It could explore inviting participation in these working groups from relevant United Nations entities.24
c. The panel would operate independently, particularly in relation to its findings and conclusions, with support from a United Nations-system team drawn from the proposed AI office and relevant United Nations agencies, such as ITU and the United Nations Educational, Scientific and Cultural Organization (UNESCO).
d. It should partner with and build on research efforts led by other international institutions such as OECD and the Global Partnership on Artificial Intelligence, and other relevant processes such as the recent scientific report on the risks of advanced AI commissioned by the United Kingdom,25 and relevant regional organizations.
e. A steering committee would develop a research agenda ensuring the inclusivity of views and incorporation of ethical considerations, oversee the allocation of resources, foster collaboration with a network of academic institutions and other stakeholders, and review the panel’s activities and deliverables.

100 By drawing on the unique convening power of the United Nations and inclusive global reach across stakeholder groups, an international scientific panel can deliver trusted scientific collaboration processes and outputs and correct information asymmetries in ways that address the representation and coordination gaps identified in paragraphs 66 and 73, thereby promoting equitable and effective international AI governance.

22 It could build, in particular, upon existing sectoral or regional panels already operating.
23 It could also conduct outreach to broader audiences, including civil society and the general public.
24 For a list of United Nations entities active in this area, see figure 9.

Box 9: Open versus closed AI systems


Among the topics discussed in our consultations was the ongoing debate over open versus closed AI systems.
AI systems that are open in varying degrees are often referred to as “open-source AI”, but this is somewhat of a
misnomer when compared with open-source software (code). It is important to recognize that openness in AI
systems is more of a spectrum than a single attribute.

One article explained that a “fully closed AI system is only accessible to a particular group. It could be an AI
developer company or a specific group within it, mainly for internal research and development purposes. On the
other hand, more open systems may allow public access or make available certain parts, such as data, code, or
model characteristics, to facilitate external AI development.”a

Open-source AI systems in the generative AI field present both risks and opportunities. Companies often cite “AI
safety” as a reason for not disclosing system specifications, reflecting the ongoing tension between open and
closed approaches in the industry. Debates typically revolve around two extremes: full openness, which entails
sharing all model components and data sets; and partial openness, which involves disclosing only model weights.

Open-source AI systems encourage innovation and are often a requirement for public funding. On the open
extreme of the spectrum, when the underlying code is made freely available, developers around the world can
experiment, improve and create new applications. This fosters a collaborative environment where ideas and
expertise are readily shared. Some industry leaders argue that this openness is vital to innovation and economic
growth.

However, in most cases, open-source AI models are available as application programming interfaces. In this case,
the original code is not shared, the original weights are never changed and model updates become new models.

Additionally, open-source models tend to be smaller and more transparent. This transparency can build trust,
allow for ethical considerations to be proactively addressed, and support validation and replication because users
can examine the inner workings of the AI system, understand its decision-making process and identify potential
biases.

a Angela Luna, “The open or closed AI dilemma”, 2 May 2024. Available at https://bipartisanpolicy.org/blog/the-open-or-closed-ai-dilemma.

25 International Scientific Report on the Safety of Advanced AI: Interim Report. Available at https://gov.uk/government/publications/international-scientific-report-on-the-safety-of-advanced-ai.



Box 9: Open versus closed AI systems (continued)
Closed AI systems offer greater control to their developers. Additionally, closed-source systems can be more
streamlined and efficient, as the codebase is not constantly evolving through public contributions. Many
companies regard full openness as impractical and promote partial openness as the only feasible option.
However, this viewpoint overlooks the potential for a balanced approach that can achieve “meaningful openness”.b

Meaningful openness exists between the two extremes of the spectrum and can be tailored to different use cases.
This balanced method fosters safe, innovative and inclusive AI development by enabling public scrutiny and
independent auditing of disclosed training and fine-tuning data. Openness, being more than merely sharing model
weights, can propel innovation and inclusion, helping applications in research and education.

The definition of “open-source AI” is evolving,c and is often influenced by corporate interests as illustrated in figure
11. To address this, we recommend initiating a process, coordinated by the above-proposed international scientific
panel, to develop a well-rounded and gradient approach to openness. This would enable meaningful, evidence-
based approaches to openness, helping users and policymakers to make informed choices about AI models and
architectures.

Data disclosure – even if limited to key elements – is essential for understanding model performance, ensuring
reproducibility and assessing legal risks. Clarification around gradations of openness can help to counter
corporate “open-washing” and foster a transparent tech ecosystem.

It is also important that, as the technology matures, we consider the governance regimes for the application of
both open and closed AI systems. We need to develop responsible AI guidelines, binding norms and measurable
standards for developers and designers of products and services that incorporate AI technologies, as well as for
their users and all actors involved throughout their life cycle.

Figure 11: Corporate interests and openness

[Figure: a gradient of levels of access to AI systems, from fully closed (internal research only; high risk control, low auditability, limited perspectives), through gated-to-public stages – gradual/staged release, hosted access, cloud-based/API access and downloadable models – to fully open (community research; low risk control, high auditability, broader perspectives). Example systems placed along this spectrum include PaLM (Google), Gopher (DeepMind), Imagen (Google) and Make-A-Video (Meta) towards the closed end; GPT-2 (OpenAI), DALL-E 2 (OpenAI), Midjourney (Midjourney), Stable Diffusion (Stability AI) and GPT-3 (OpenAI) in between; and OPT (Meta), Craiyon (craiyon), GPT-J (EleutherAI) and BLOOM (BigScience) towards the open end.]

Source: Irene Solaiman, “The gradient of generative AI release: methods and considerations”, Proceedings of the 2023 Association for Computing Machinery (ACM) Conference on Fairness, Accountability, and Transparency (June 2023), pp. 111–122.

b Inspired by Andreas Liesenfeld and Mark Dingemanse, “Rethinking open source generative AI: open-washing and the EU AI Act”, The 2024 ACM
Conference on Fairness, Accountability, and Transparency (FAccT ’24) (June 2024).
c The Open Source AI Definition – draft v. 0.0.3. Available at https://opensource.org/deepdive/drafts/the-open-source-ai-definition-draft-v-0-0-3.

B. Common ground

101 Alongside a common understanding of AI, common ground is needed to establish governance approaches that are interoperable across jurisdictions and grounded in international norms, such as the Universal Declaration of Human Rights (see principle 5 above).

102 This is required at the global level not only for equitable representation, but also for averting regulatory “races to the bottom” while reducing regulatory friction across borders, maximizing technical and ontological interoperability, and detecting and responding to incidents emanating from decisions along AI’s life cycle which span multiple jurisdictions.

Policy dialogue on AI governance

Recommendation 2: Policy dialogue on AI governance

We recommend the launch of a twice-yearly intergovernmental and multi-stakeholder policy dialogue on AI governance on the margins of existing meetings at the United Nations. Its purpose would be to:
a. Share best practices on AI governance that foster development while furthering respect, protection and fulfilment of all human rights, including pursuing opportunities as well as managing risks;
b. Promote common understandings on the implementation of AI governance measures by private and public sector developers and users to enhance international interoperability of AI governance;
c. Share voluntarily significant AI incidents that stretched or exceeded the capacity of State agencies to respond; and
d. Discuss reports of the international scientific panel on AI, as appropriate.

103 International governance of AI is currently a fragmented patchwork at best. There are 118 countries that are not parties to any of the seven recent prominent non-United Nations AI governance initiatives with intergovernmental tracks26 (see fig. 8). Representation gaps occur even among the top 60 AI capacity countries, highlighting the selectiveness of international AI governance today (see fig. 12).

104 An inclusive policy forum is needed so that all Member States, drawing on the expertise of stakeholders, can share best practices that foster development while furthering respect, protection and fulfilment of all human rights, promote interoperable governance approaches and monitor for common risks that warrant further policy interventions.

105 This does not mean global governance of all aspects of AI (which is impossible and undesirable, given States’ diverging interests and priorities). Yet, exchanging views on AI developments and policy responses can set the framework for international cooperation.

106 The United Nations is uniquely placed to facilitate such dialogues inclusively in ways that help Member States to work together effectively. The United Nations system’s existing and emerging suite of norms can offer strong normative foundations for concerted action, grounded in the Charter of the United Nations, human rights and other international law, including environmental law and international humanitarian law, as well as the SDGs and other international commitments.27

26 These initiatives are not always directly comparable. Some reflect the work of existing international or regional organizations, while others are based on ad hoc invitations
from like-minded countries.
27 See, for example, the Charter of the United Nations (preamble, purposes and principles, and Articles 13, 55, 58 and 59). See also core international instruments on human
rights (Universal Declaration of Human Rights; International Covenant on Civil and Political Rights; International Covenant on Economic, Social and Cultural Rights;
International Convention on the Elimination of All Forms of Racial Discrimination; Convention on the Rights of the Child; Convention on the Elimination of All Forms of
Discrimination against Women; Convention against Torture; Convention on the Rights of Persons with Disabilities; Convention on the Rights of Migrants; International
Convention for the Protection of All Persons from Enforced Disappearance); instruments on international human rights law (Geneva Conventions; Convention on Certain
Conventional Weapons; Genocide Convention; Hague Convention); instruments on related principles such as distinction, proportionality and precaution and the 11
principles on Lethal Autonomous Weapons Systems adopted within the Convention on Certain Conventional Weapons); disarmament and arms control instruments
in terms of prohibitions on weapons of mass destruction (Treaty on the Non-Proliferation of Nuclear Weapons; Chemical Weapons Convention; Biological Weapons
Convention); environmental law instruments (United Nations Framework Convention on Climate Change; Convention on the Prohibition of Military or Any Other Hostile Use of
Environmental Modification Techniques); the Paris Agreement and related principles such as precautionary principle, integration principle and public participation; and non-
binding commitments on the 2030 Agenda for Sustainable Development, gender and ethics, such as the UNESCO Recommendation on the Ethics of Artificial Intelligence.
Figure 12: Top 60 AI countries (2023 Tortoise Index) party to the sample of major plurilateral AI governance initiatives with intergovernmental tracks

*Including jurisdictions such as the Holy See and the European Union.
Sources:
• OECD, Recommendation of the Council on Artificial Intelligence (adopted 21 May 2019), available at https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
• G20, AI Principles (June 2019), available at https://www.mofa.go.jp/policy/economy/g20_summit/osaka19/pdf/documents/en/annex_08.pdf.
• GPAI, 2022 ministerial declaration (22 November 2022), available at https://one.oecd.org/document/GPAI/C(2022)7/FINAL/en/pdf.
• Bletchley Declaration (1 Nov 2023), available at https://gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023.
• G7, Hiroshima AI Process G7 Digital & Tech Ministers’ Statement (1 Dec 2023), available at https://www.soumu.go.jp/hiroshimaaiprocess/pdf/document02_en.pdf.
• Council of Europe, Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (adopted 17 May 2024), available at https://coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence.
• Seoul Ministerial Statement for advancing AI safety, innovation and inclusivity, AI Seoul Summit (22 May 2024).
• Tortoise Media, Global AI Index (2023), available at https://tortoisemedia.com/intelligence/global-ai/#rankings.
107 Combined with expertise from the international scientific panel and capacity development (see recommendations 1, 4 and 5), inclusive dialogue at the United Nations can help States and companies to update their regulatory approaches and methodologies to keep pace with accelerating AI in an interoperable way that promotes common ground. Some of the distinctive features of the United Nations can be helpful in this regard:
a. Anchoring inclusive dialogue in the United Nations suite of norms, including the Charter of the United Nations and human rights and international law, can promote a “race to the top” in governance approaches. Conversely, without the universal global membership of the United Nations, international collective action faces greater pressure to succumb to regulatory “races to the bottom” between jurisdictions on AI safety and scope of use.
b. The global membership of the United Nations can also enable coordination between existing sub-global initiatives for greater compatibility between them. Many in our consultations called for the United Nations to be a key space for enabling soft coordination across existing regional and plurilateral initiatives, taking into account diverse values across different cultures, languages and regions.
c. The Organization’s predictable, transparent, rule-based and justifiable procedures can enable continuous political engagement to bridge non-likeminded countries, and moderate dangerous contestation. In addition to building confidence, relationships and communication lines for times of crisis, reliably inclusive dialogues can foster new norms, customary law and agreements that enhance cooperation among States.

108 Operationally:
a. A policy dialogue could begin on the margins of existing meetings in New York, such as the General Assembly,28 Geneva and locations in the global South.
b. One portion of each dialogue session might focus on national approaches led by Member States, with a second portion sourcing expertise and inputs from key stakeholders – in particular, technology companies and civil society representatives.
c. Governmental participation could be open to all Member States, or a regionally balanced grouping (for more focused discussion among a rotating, representative interested subset), or a combination of both, calibrated as appropriate to different agenda items or segments over time, as the technology evolves and global concerns emerge or gain salience. A fixed geometry might not be helpful, given the dynamic nature of the technology and the policy context.
d. In addition to the formal dialogue sessions, multi-stakeholder engagement on AI policy could also leverage other existing mechanisms such as the ITU AI for Good meeting, the annual Internet Governance Forum meeting, the UNESCO AI ethics forum and the United Nations Conference on Trade and Development (UNCTAD) eWeek, open for participation to representatives of all Member States on a voluntary basis.
e. In line with the inclusive nature of the dialogue, discussion agendas could be broad to encompass diverse perspectives and concerns. For instance, twice-yearly meetings could focus more on opportunities across diverse sectors in one meeting, and more on risk trends in the other.29 This could include uses of AI to achieve the SDGs, how to protect children, minimize climate impact, as well as an exchange on approaches to manage risks. Meetings could also include a discussion of definitions of terms used in AI governance and AI technical standards, as well as reports of the international scientific panel, as appropriate.

28 Analogous to the high-level political forum in the context of the SDGs that takes place under the auspices of the Economic and Social Council.
29 Relevant parts of the United Nations system could be engaged to highlight opportunities and risks, including ITU on AI standards; ITU, UNCTAD, UNDP and
the Development Coordination Office on AI applications for the SDGs; UNESCO on ethics and governance capacity; the Office of the United Nations High
Commissioner for Human Rights (OHCHR) on human rights accountability based on existing norms and mechanisms; the Office for Disarmament Affairs on
regulating AI in military systems; UNDP on support to national capacity for development; the Internet Governance Forum for multi-stakeholder engagement and
dialogue; WIPO, ILO, WHO, FAO, the World Food Programme, UNHCR, UNESCO, the United Nations Children’s Fund, the World Meteorological Organization and
others on sectoral applications and governance.



f. In addition, diverse stakeholders – in particular technology companies and civil society representatives – could be invited to engage through existing institutions detailed below, as well as policy workshops on particular aspects of AI governance such as limits (if any) of open-source approaches to the most advanced forms of AI, thresholds for tracking and reporting of AI incidents, application of human rights law to novel use cases, or the use of competition law/antitrust to address concentrations of power among technology companies.30
g. The proposed AI office could also curate a repository of AI governance examples, including legislation, policies and institutions from around the world for consideration of the policy dialogue, working with existing efforts, such as OECD.

109 Notwithstanding the two General Assembly resolutions on AI in 2024, there is currently no mandated institutionalized dialogue on AI governance at the United Nations that corresponds to the reliably inclusive vision of this recommendation. Similar processes do exist at the international level, but primarily in regional or plurilateral constellations (para. 57), which are not reliably inclusive and global.

110 Complementing a fluid process of plurilateral and regional AI summits,31 the United Nations can offer a stable home for dialogue on AI governance. Inclusion by design – a crucial requirement for playing a stabilizing role in geopolitically delicate times – can also address representation and coordination gaps identified in paragraphs 64 and 72, promoting more effective collective action on AI governance in the common interest of all countries.

AI standards exchange

Recommendation 3: AI standards exchange

We recommend the creation of an AI standards exchange, bringing together representatives from national and international standard-development organizations, technology companies, civil society and representatives from the international scientific panel. It would be tasked with:
a. Developing and maintaining a register of definitions and applicable standards for measuring and evaluating AI systems;
b. Debating and evaluating the standards and the processes for creating them; and
c. Identifying gaps where new standards are needed.

111 When AI systems were first explored, few standards existed to help to navigate or measure this new frontier. The Turing Test – of whether a machine can exhibit behaviour equivalent to (or indistinguishable from) a human being – captured the popular imagination, but is of more cultural than scientific significance. Indeed, it is telling that some of the greatest computational advances have been measured by their success in games, such as when a computer could beat humans at chess, Go, poker or Jeopardy. Such measures were easily understood by non-specialists, but were neither rigorous nor particularly scientific.

112 More recently, there has been a proliferation of standards. Figure 13 illustrates the increasing number of relevant standards adopted by ITU, the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC) and the Institute of Electrical and Electronics Engineers (IEEE).32

30 Such a gathering could also provide an opportunity for multi-stakeholder debate of any hardening of the global governance of AI. These might include, for
example, prohibitions on the development of uncontainable or uncontrollable AI systems, or requirements that all AI systems be sufficiently transparent so that
their consequences can be traced back to a legal actor that can assume responsibility for them.
31 Although multiple AI summits have helped a subset of 20–30 countries to align on AI safety issues, participation has been inconsistent: Brazil, China and
Ireland endorsed the Bletchley Declaration in November 2023, but not the Seoul Ministerial Statement six months later (see fig. 12). Conversely, Mexico and
New Zealand endorsed the Seoul Ministerial Statement, but did not endorse the Bletchley Declaration.
32 Many new standards are also emerging at the national and multinational levels, such as the United States White House Voluntary AI Commitments and the
European Union Codes of Practice for the AI Act.

Final Report 55
Figure 13: Number of standards related to AI

[Stacked bar chart showing the number of AI-related standards adopted by ITU, ISO/IEC and IEEE, together with further ISO and IEEE standards under development, by year: 2018: 3; 2019: 6; 2020: 16; 2021: 32; 2022: 58; 2023: 101; 2024 (Jan.–Jun.): 117.]

Sources: IEEE, ISO/IEC, ITU, World Standards Cooperation (based on June 2023 mapping, extended through inclusion of standards related to AI).

113 Two trends stand out. First, these standards were largely developed to address specific questions. There is no common language and many terms that are routinely used with respect to AI – fairness, safety, transparency – do not have agreed definitions or measurability (despite recent work by OECD and the National Institute of Standards and Technology adopting a new approach for dynamic systems, such as AI).

114 Secondly, there is a disjunction between those standards that were adopted for narrow technical or internal validation purposes, and those that are intended to incorporate broader ethical principles. Computer scientists and social scientists often advance different interpretations of the same concept, and a joined-up paradigm of socio-technical standards is promising but remains aspirational (see box 10).

115 The result is that we have an emerging set of standards that are not grounded in a common understanding of meaning or are divorced from the values they were intended to uphold. Crucially, there are few agreed standards concerning energy consumption and AI. A lack of integration of human rights considerations into standard-setting processes is another gap to be bridged.33

116 This has real costs. In addition to the concerns of Member States and diverse individuals, many of our consultations revealed the concern of businesses (including small and medium-sized enterprises in the developing world) that fragmented governance and inconsistent standards raise the costs of doing business in an increasingly globalized world.

117 This report is not proposing that the United Nations add to this proliferation of standards. Instead, drawing on the expertise of the international scientific panel (proposed in recommendation 1), and incorporating members from the various entities that have contributed to standard-setting, as well as representatives from technology companies and civil society, the United Nations system could serve as a clearing house for AI standards that would apply globally.34

33 See A/HRC/53/42 (Human rights and technical standard-setting processes for new and emerging digital technologies: Report of the Office of the United
Nations High Commissioner for Human Rights) and Human Rights Council resolution 53/29 (New and emerging digital technologies and human rights).
34 Even this may seem a challenging task, but progress towards a global minimum tax deal shows the possibility of collective action even in economically and
politically complex areas.



Box 10: Standards applicable to AI safety
A comprehensive approach to AI safety involves understanding the capabilities of advanced AI models, adopting
standards for safe design and deployment, and evaluating both the systems and their broader impacts.

In the past, AI standards focused mainly on technical specifications, detailing how systems should be built and
operated. However, as AI technologies increasingly impact society, there is a need to shift to a socio-technical
paradigm. This shift acknowledges that AI systems do not exist in a vacuum; they interact with human users
and affect societal structures. Modern AI standards can integrate ethical, cultural and societal considerations
alongside technical requirements. In the context of safety, this includes ensuring reliability and interpretability, as
well as assessing and mitigating risks to individual and collective rights,a national and international security, and
public safety in different contexts.

A primary objective of the recently established AI safety national institutes is to ensure consistent and effective
approaches to AI safety. Harmonizing such approaches would allow AI systems to meet high safety benchmarks
internationally, enabling cross-border innovation and trade while maintaining rigorous safety protocols.

Insofar as “safety” is contextual, involving various stakeholders and cultures in creating such standards enhances
their relevance and effectiveness and helps with shared understanding of definitions and concepts. By
incorporating diverse perspectives, protocols can more thoroughly address the wide range of potential risks and
benefits associated with AI technologies.

a See A/HRC/53/42 (Human rights and technical standard-setting processes for new and emerging digital technologies: Report of the Office of the
United Nations High Commissioner for Human Rights) and Human Rights Council resolution 53/29 (New and emerging digital technologies and
human rights).

118 The Organization’s added-value would be to foster exchange among the broadest set of standards development organizations to maximize global interoperability across technical standards, while infusing emerging knowledge on socio-technical standards development into AI standards discussions.

119 Collecting and distributing information on AI standards, drawing on and working with existing efforts such as the AI Standards Hub,35 would enable participants from across standards development organizations to converge on common language in key areas.

120 Supported by the proposed AI office, the standards exchange would also benefit from strong ties to the international scientific panel on technical questions and the policy dialogue on moral, ethical, regulatory, legal and political questions.

121 If appropriately agreed, ITU, ISO/IEC and IEEE could jointly lead on an initial AI standards summit, with annual follow-up to maintain salience and momentum. To build foundations for a socio-technical approach incorporating economic, ethical and human rights considerations, OECD, the World Intellectual Property Organization (WIPO), the World Trade Organization, the Office of the United Nations High Commissioner for Human Rights (OHCHR), ILO, UNESCO and other relevant United Nations entities should also be involved.36

35 See https://aistandardshub.org.
36 This could include relevant sectoral, national and regional standards organizations.

122 The standards exchange should also inform the capacity-building work in recommendation 4, ensuring that the standards support practice on the ground. It could share information about tools developed nationally or regionally that enable self-assessment of compliance with standards.

123 The report does not presently propose that the United Nations should do more than serve as a forum for discussing and agreeing on standards. To the extent that safety standards are formalized over time, these could serve as the basis for monitoring and verification by an eventual agency.

C. Common benefits

124 The 2030 Agenda with its 17 SDGs can lend a unique purpose to AI, bending the arc of investments away from wasteful and harmful use and towards global development challenges. Otherwise, investments will chase profits even at the cost of imposing negative externalities on others. Another signal contribution that the United Nations can make is linking the positive application of AI to an assurance of the equitable distribution of its opportunities (box 11).

Box 11: AI and the SDGs


AI’s potential in advancing science (box 1) and creating economic opportunities (box 2) underlies hope that AI
can accelerate progress in achieving the SDGs. A 2023 review of relevant evidence argued that AI may act as
an enabler on 134 targets (79 per cent) across all SDGs, generally through technological improvement that may
enable certain prevailing limitations to be overcome.a

An overview of current expert perceptions is illustrated by the results of an opportunity scan exercise
commissioned for our work, which surveyed over 120 experts from 38 countries about their expectations for AI’s
positive impact in terms of scientific breakthroughs, economic activities and the SDGs. The survey asked only
about possible positive implications of AI.

Overall, experts had mixed expectations on how soon AI could have a major positive impact (see also fig. 14):
• They were most optimistic about accelerating scientific discoveries, with 7 in 10 saying that it is likely
that AI would cause a major positive impact in the next three years or sooner in high/upper-middle-
income countries, and 28 per cent predicting the same for lower-middle/lower-income countries.
• Around 5 in 10 expected major positive impact on increasing economic activity as likely in the next three
years or sooner in high/upper-middle-income countries, and 32 per cent expected the same in lower-
middle/lower-income countries.
• A total of 46 per cent expected major positive impact on progress on the SDGs as likely in the next three
years or sooner in high/upper-middle-income countries. However, only 21 per cent expected this in lower-
middle/lower-income countries, with 4 in 10 experts gauging such major positive impact on the SDGs as
likely to be at least 10 years away in such places.

a See Ricardo Vinuesa and others, “The role of artificial intelligence in achieving the Sustainable Development Goals”. Nature Communications,
vol. 11, No. 233 (January 2020). This study also argued that 59 targets (35%, also across all SDGs) may experience a negative impact from the
development of AI.




Figure 14: Experts’ expectations regarding timing of major positive impact of AI by area

“By when do you expect it likely (50% chance or more) that AI will cause a major positive impact…?”
Response categories: Already occurring / Within next 18 months / Within next 3 years / Within next 10 years / Longer than 10 years, never.

Accelerating scientific discoveries
  High/upper-middle-income countries: 24% / 10% / 36% / 24% / 5% (n = 111)
  Lower-middle/lower-income countries: 9% / 2% / 17% / 37% / 35% (n = 65)
Generally increasing economic activity
  High/upper-middle-income countries: 15% / 5% / 32% / 41% / 8% (n = 101)
  Lower-middle/lower-income countries: 9% / 4% / 19% / 43% / 25% (n = 53)
Progress on the SDGs
  High/upper-middle-income countries: 9% / 10% / 27% / 31% / 24% (n = 93)
  Lower-middle/lower-income countries: 9% / 12% / 37% / 42% (only four segments legible in the chart) (n = 65)

Note: Excludes “Don’t know” / “No opinion” and blank responses. Only respondents reporting relevant knowledge were asked about lower-middle/lower-income countries.
Source: OSET AI Opportunity Scan survey, 9–21 August 2024.

Figure 15: Experts’ expectations regarding major positive impact of AI in the next three years, by area and SDG

“In the next three years, how much do you expect AI to directly contribute towards… in lower-middle/lower-income countries?”
Response scale: 1 Don’t expect any positive impact; 2 Expect minor positive impact; 3 Expect positive impact; 4 Expect major positive impact; 5 Expect transformative positive impact.
H = High/upper-middle-income countries; L = Lower-middle/lower-income countries.

[Stacked bar chart comparing H and L responses for accelerating scientific discoveries, increasing economic activity and SDGs 1–7 and 10–16; individual segment values are not reproduced here.]

Note: Excludes “Don’t know” / “No opinion” and blank responses. Only respondents reporting relevant knowledge were asked about lower-middle/lower-income countries. Did not ask about SDGs 8, 9 and 17.
Source: OSET AI Opportunity Scan survey, 9–21 August 2024.

Experts expected greater positive impact of AI in the next three years in higher-income countries across all areas
surveyed, including accelerating scientific discoveries, increasing economic activityb and in the 14 SDG areas
asked about (see fig. 15). Experts were most optimistic about AI’s positive impact on health and education (SDGs
3 and 4), where 20–25 per cent of experts expected major or transformative positive impact of AI in the next
three years in high/upper-middle-income countries. They were least optimistic regarding AI’s positive impact on
gender equality and inequalities (SDGs 5 and 10), with 2 in 3 expecting AI to have no positive impact on reducing
inequalities within or between countries in either higher or lower-income countries.

AI may be expected to have earlier and greater impacts in higher-income countries, in part due to barriers holding
back lower-middle and lower-income countries (see fig. 16). Missing enablers – from poorer infrastructure, to lack
of domestic policy and international governance – were cited by more than half of respondents as important factors
causing additional difficulty for lower-income countries in harnessing AI for economic activity and SDG progress.

Figure 16: Experts’ ratings of barriers to harnessing AI to drive additional economic activity and progress on the SDGs in lower-middle/lower-income countries

“How important do you consider the below factors in causing additional difficulty for lower-middle/lower-income countries (compared with high/upper-middle-income countries) in harnessing AI to drive additional economic activity and progress on the SDGs?”
Scale: 1 Not important; 2 Slightly important; 3 Somewhat important; 4 Important; 5 Very important. Percentages are as labelled in the original chart (smaller response categories were not labelled); the final figure is the mean rating.

Poorer technological / communications infrastructure: 8% / 23% / 65% (n = 71; mean 4.46)
Less access to compute: 8% / 24% / 63% (n = 71; mean 4.44)
Less ability to train domestic talent to train and develop new models: 8% / 27% / 59% (n = 71; mean 4.38)
Less ability to retain local talent when trained (“brain drain”): 7% / 24% / 61% (n = 70; mean 4.37)
Less ability to train domestic talent to apply and deploy existing models: 13% / 30% / 54% (n = 71; mean 4.30)
More difficulty collecting new necessary data: 14% / 30% / 52% (n = 71; mean 4.28)
Less ability to access existing datasets (e.g. proprietary data): 6% / 17% / 22% / 53% (n = 72; mean 4.17)
Less access to existing models: 7% / 21% / 28% / 40% (n = 72; mean 3.93)
Less ability to combine fragmented data: 9% / 19% / 22% / 45% (n = 69; mean 3.91)
Lack of partnership between domestic actors (e.g. domestic government and businesses): 10% / 20% / 27% / 39% (n = 70; mean 3.86)
Lack of partnership between domestic & regional/international actors: 14% / 17% / 37% / 30% (n = 70; mean 3.80)
Lack of effective domestic policy to enable AI: 14% / 17% / 28% / 36% (n = 69; mean 3.77)
Lack of international governance / interoperability / standards: 14% / 13% / 31% / 34% (n = 70; mean 3.71)

Note: Excludes “Don’t know” / “No opinion” and blank responses. Only respondents reporting relevant knowledge were asked about lower-middle/lower-income countries.
Source: OSET AI Opportunity Scan survey, 9–21 August 2024.

These results underline the tentativeness of AI’s eventual contribution to the SDGs, and how it remains highly
dependent on missing enablers. This is particularly so in less developed countries, which already lack much of
what more-developed countries have, from infrastructure to policy. Without cooperation to build capacity and
facilitate access to key enablers, existing AI divides could further widen and become entrenched, limiting AI’s
ability to meaningfully contribute to progress on science, economic benefit and progress on the SDGs before 2030.

b The share of experts expecting “major positive impact” on increasing economic activity and accelerating scientific discovery over three years is
higher in the first chart than the second chart. This may be due to the qualifier “by when do you expect it likely (50% chance or more) that AI will
cause a major positive impact” (emphasis added) in the question responses depicted in the first chart, which is absent in the second.



125 As we argued in our interim report, this depends largely on access to talent, compute and data, in ways that help cultural and linguistic diversity to flourish. Governance itself can be a key enabler, aligning incentives, engendering trust and sustainable practices while promoting collaboration across borders and subject domains. Without a comprehensive and inclusive approach to AI governance, the potential of AI to contribute positively to the SDGs could be missed, and its deployment could inadvertently reinforce existing disparities and biases.

126 During extensive consultations conducted by the Advisory Body on topics such as education, health, data, gender, children, peace and security, creative industries and work, it became evident that AI holds substantial potential to significantly accelerate progress on the SDGs owing to its capabilities to boost innovation and delivery in various critical areas.

127 However, AI is not a panacea for development challenges; it is one component within a broader set of solutions, and may even exacerbate some of these challenges, such as climate change. To truly unlock AI’s potential to address societal challenges, collaboration among governments, academia, industry and civil society is crucial.

128 The effectiveness of AI solutions depends on the quality and availability of data, and there are significant concerns about quality and representativeness in SDG-relevant data sets, which may fail to reflect relevant realities of certain populations. Further, AI solutions designed by AI experts without full knowledge of the intersecting domains of application often work in silico, and are not robust or impactful enough in actual development settings. That is the reason why AI solutions must be designed collaboratively and implemented with a deep understanding of their social, economic and cultural contexts. They must fit into broader local and national strategies for digital transformation and addressing digital divides.

129 For example, AI capabilities in low- and lower-middle-income countries cannot be achieved without securing reliable electricity and Internet connectivity for running data centres, maintaining consistent computer operations, accessing global data sets, engaging in international research collaborations and using cloud-based AI tools. Therefore, we align ourselves with calls for investing in basic digital infrastructure, which is a prerequisite for developing countries to participate in and benefit from AI advancements.

130 Building AI capacity is vital to ensuring that individuals across the globe, regardless of their region’s development stage, can benefit from AI advancements. Strategic capacity-building, backed by adequate funding, is also essential to making AI technologies effective, sustainable and in the public interest – key for global development efforts. Below, we examine three critical enablers of national AI capabilities: the availability of technical expertise, access to compute and the availability of quality data. We then recommend specific actions.

Talent

131 The ability of societies around the world to participate in the beneficial outcomes of AI depends, first and foremost, on people. It is important to acknowledge that not every society needs cadres of computer scientists for building their own models. However, regardless of whether technology is bought, borrowed or built, human resources are needed to understand the capabilities and limitations of AI and harness AI-enabled use cases appropriately.

132 Such a capacity – primarily in the public sector, but also in academia, business and civil society – will enhance the effectiveness of AI strategies and their implementation across various sectors. Nurturing AI-related human capacity will also be vital for preserving the world’s cultural and linguistic diversity and building high-quality data sets for future AI development. In essence, this is capacity-building for public interest AI.

133 Fostering human resources in diverse settings with youthful demographics, such as Africa (one third of the global workforce will be African within the first half of this century) will also be vital for the future global talent pipeline. Enhancing the capacity of women in tech needs to be focused on closing the existing gender gap, on the one hand, and avoiding the gender gap in AI, on the other hand. The AI sector also needs more women in leadership positions to embed gender perspectives in AI governance. This starts with enabling increasing AI talent opportunities for girls.

Compute

134 Despite ongoing efforts to develop less compute-hungry approaches to AI, the need for access to affordable compute remains acute for training capable AI models.37 This is one of the biggest barriers to entry in the field of AI for companies in the global South, but also many start-ups and small and medium-sized enterprises in the global North. Of the top 100 high-performance computing clusters in the world capable of training large AI models, none is hosted in a developing country.38 There is only one African country among the top 300. Two countries account for half of the world’s hyperscale data centres.39

135 Most developers access compute infrastructure through cloud services; many have chosen to partner with the large cloud companies to secure reliable access to compute. It is possible that supply-chain issues may be resolved over time and competition may lead to more diverse sources of hardware, including high-performance chips for training models and AI accelerator chips for deployment on mobile devices. However, for the foreseeable future this constraint will remain a formidable barrier to a more globally inclusive AI innovation ecosystem.

136 Ironically, compute capacity can lie idle or get outmoded quickly. There is potential value in fully using such capacity across depreciation cycles. However, there are hurdles to be overcome in terms of interoperability of different hardware configurations and scheduling demanding tasks, while preserving priority of time-critical use (such as for meteorological predictions).

137 Moreover, without talent and data, compute alone is of no value. In the proposed global fund for AI, we consider how to address all three through a combination of financial and in-kind support.

Data

138 Although many discussions about the economics of AI focus on the “war for talent” and competition over hardware, such as graphics processing units (GPUs), data are no less vital. Facilitating access to quality training data at scale for training AI models by start-ups and small and medium-sized enterprises, as well as mechanisms to compensate data holders and creators of training data in rights-respecting ways, might be the most important enabler of a flourishing AI economy. Pooling data for the public interest in furthering specific SDGs is one key aspect (outlined in box 12), although it is not enough.

139 In the context of AI, it is common to speak of “misuse” of data (e.g. infringing on privacy) or “missed” uses of data (failing to exploit existing data sets), but a related problem is “missing” data, which includes the large portions of the globe that are data poor. One example is health care, where around half of the leading data sets can be traced to a dozen organizations, with one in Europe, one in Asia and the rest in North America.40

140 Another example is agriculture, where data are required across a complex interplay of factors (such as climate, soil and crop management practices) to enable useful AI models. Agriculture also often suffers from paucity of data and data-collection infrastructures. Dedicated efforts are needed to curate agriculture data sets particularly in the context of climate change resilience for food systems.

37 The Advisory Body is aware of a recent case where a company based in the global South spent $70 million for a 3-month training run for a large language model. Owning the graphics processing units (GPUs) instead of renting them from cloud service providers would have cost many times less.
38 See https://top500.org/statistics/sublist; proxy indicator since most high-performance computing clusters do not have GPUs and are of limited use for advanced AI.
39 UNCTAD, Digital Economy Report 2021 (Geneva, 2021).
40 See https://2022.internethealthreport.org/facts.



Box 12: Pooling data for the public interest in SDG areas
Collaborative data and AI commons – where shared models are cross-trained on pooled data – can play a
key role in furthering the public interest where data would otherwise be missing or too sparse for AI benefits.
Cross-functional and multi-domain data pools could enable the development of transdisciplinary data sets that
encompass various SDG domains, derived from a variety of sources.

As an example, we can consider the complex issue of assessing the health impacts of climate change. To
effectively address this challenge, a transdisciplinary approach is essential, integrating epidemiological data on
the prevalence of diseases with meteorological data tracking climate variations. By pooling these distinct types
of data from countries worldwide, in a privacy-preserving manner, researchers may be able to use AI to identify
patterns and correlations that are not evident from isolated data sets.

Including data from all countries ensures comprehensive coverage, reflecting the global nature of climate change
and capturing diverse environmental impacts and health outcomes across different regions. The transdisciplinary
origins of the data enhance the predictive accuracy of models that aim to forecast future public health crises or
natural disasters driven by climate change.

141 Analogous to the problem of informal capital, those whose data are not captured – from birth records to financial transactions – may be unable to participate in the benefits of the AI economy, obtain government benefits or access credit. Use of synthetic data may only partially offset the need for new data sets.

142 Feedback on our interim report noted that there was insufficient articulation of how current cross-jurisdictional practices around sourcing, use and non-disclosure of AI training data threaten rights and result in economic concentration. It was recommended that we consider how international governance could enable and catalyse more diverse participation in the leveraging of data for AI.

Building a core public international AI capacity for common benefit

143 Cutting across the above three enablers, advanced economies have both the capability and duty to facilitate AI capacity-building through international collaboration. In turn, they will benefit from a more broad-based digital economy, as well as quality talent and data flows. Importantly, everyone will benefit from the mainstreaming of good AI governance through such collaboration.

144 Cooperation should focus on nurturing AI talent, boosting public AI literacy, improving capacity for AI governance, broadening access to AI infrastructure, promoting data and knowledge platforms suited to diverse cultural and regional needs, and enhancing uptake of AI applications and service capabilities. Only such a comprehensive approach can ensure equitable access to AI benefits, so that no nation is left behind.

145 Many of the stakeholders we consulted emphasized that detailed strategies should be outlined to pool global resources together to build capacity, catalyse collective action towards equitable sharing of opportunities and close the digital divide.

Capacity development network

Recommendation 4: Capacity development network

We recommend the creation of an AI capacity development network to link up a set of collaborating, United Nations-affiliated capacity development centres making available expertise, compute and AI training data to key actors. The purpose of the network would be to:
a. Catalyse and align regional and global AI capacity efforts by supporting networking among them;
b. Build AI governance capacity of public officials to foster development while furthering respect, protection and fulfilment of all human rights;
c. Make available trainers, compute and AI training data across multiple centres to researchers and social entrepreneurs seeking to apply AI to local public interest use cases, including via:
i. Protocols to allow cross-disciplinary research teams and entrepreneurs in compute-scarce settings to access compute made available for training/tuning and applying their models appropriately to local contexts;
ii. Sandboxes to test potential AI solutions and learn by doing;
iii. A suite of online educational opportunities on AI targeted at university students, young researchers, social entrepreneurs and public sector officials; and
iv. A fellowship programme for promising individuals to spend time in academic institutions or technology companies.

146 From the Millennium Development Goals to the SDGs, the United Nations has long contributed to the development of capacities of individuals and institutions.41 Through the work of UNESCO, WIPO and others, the United Nations has helped to uphold the rich diversity of cultures and knowledge-making traditions across the globe.

147 At the same time, capacity development for AI would require a fresh approach, in particular cross-domain training to build a new generation of multidisciplinary experts in areas such as public health and AI, or food and energy systems and AI.

148 Capacity would also have to be linked to outcomes through hands-on training in sandboxes42 and collaborative projects pooling data and compute to solve shared problems. Risk assessments, safety testing and other governance methodologies would have to be built into this collaborative training infrastructure.

149 Given the urgency and scale of the challenge, we suggest pursuing a strategic approach that pools and brokers access to compute through a network of high-performance computing nodes, incentivizes the development of critical data sets in SDG-relevant domains, promotes sharing of AI models, mainstreams best practices on AI governance and creates cross-domain talent for public interest AI, thus ensuring cross-cutting integration of human rights expertise.

150 In other words, instead of chasing critical enablers one at a time through disjointed projects, we propose an all-at-once, holistic strategy implemented through a chain of collaborating centres. Emerging initiatives on capacity development and AI for the SDGs such as the International Computation and AI Network (ICAIN) initiative launched by Switzerland can help to create the initial critical mass for this strategy.

41 The United Nations University has long been committed to capacity-building through higher education and research, and the United Nations Institute for
Training and Research has helped to train officials in domains critical to sustainable development. The UNESCO Readiness Assessment Methodology is a
key tool to support Member States in their implementation of the UNESCO Recommendation on the Ethics of Artificial Intelligence. Other examples include
the WHO Academy in Lyon, the UNCTAD Virtual Institute, the United Nations Disarmament Fellowship run by the Office for Disarmament Affairs and capacity-
development programmes led by ITU and UNDP.
42 Sandboxes have been developed by various national institutions, including financial and medical authorities, such as the Infocomm Media Development
Authority of Singapore.

64 Governing AI for Humanity


151 Ideally, there should be at least one or two nodes in each region of the world. The two centres of expertise participating in the Global Partnership on Artificial Intelligence could join the United Nations in supporting the capacity development network. Academic institutions and private sector contributors to capacity development could seek affiliation through the closest regional node or an international organization supporting the network.

152 We are particularly encouraged by the prospect of cooperation among countries, for example through federated access to compute and related infrastructure. As noted in our interim report, the European Organization for Nuclear Research (CERN) offers useful lessons. A “distributed-CERN” reimagined for AI, networked across diverse States and regions, could expand opportunities for greater access to AI tools and expertise.

153 We envision the capacity development network as a catalyser of national and regional capabilities and not as a concentrator of hardware, talent and data. By accelerating learning, it could catalyse national centres of excellence to stimulate the development of local AI innovation ecosystems, addressing the underlying coordination and implementation gaps mentioned in paragraphs 73, 80 and 81. National-level efforts could continue to employ diagnosis tools such as the UNESCO AI Readiness Assessment Methodology to help to assess initial maturity of countries, identify gaps and guide how road maps for capacity-building can be tailored per country and region, with the international network helping to address these gaps.

154 The proposed AI office may be best placed to focus on strategy, partnerships and affiliation to link up nodes with the network, serving to connect rather than reinvent. It could also help to broker access to compute across the network. A node or nodes in the network could serve as leads on specific aspects of training, host sandboxes or high-performance computing clusters for AI model development. Nodes could collaborate on research programmes on topics such as privacy-preserving use of data, new methods to link different types of hardware or data sets for model training, as well as ways to use AI models in combination with each other.

155 Our hope is that the network would also promote an alternative paradigm of AI technology development: bottom-up, cross-domain, cross-regional, open and collaborative. Given the rising energy and other costs of training and deploying AI models, and the prospect of compute lying unused, it makes sense to link computational resources for access on a time-sharing basis, while leveraging such access for advancing cross-domain talent, data and AI models for the SDGs.

Global fund for AI

Recommendation 5: Global fund for AI

We recommend the creation of a global fund for AI to put a floor under the AI divide. Managed by an independent governance structure, the fund would receive financial and in-kind contributions from public and private sources and disburse them, including via the capacity development network, to facilitate access to AI enablers to catalyse local empowerment for the SDGs, including:
a. Shared computing resources for model training and fine-tuning by AI developers from countries without adequate local capacity or the means to procure it;
b. Sandboxes and benchmarking and testing tools to mainstream best practices in safe and trustworthy model development and data governance;
c. Governance, safety and interoperability solutions with global applicability;
d. Data sets and research into how data and models could be combined for SDG-related projects; and
e. A repository of AI models and curated data sets for the SDGs.

156 The model of AI development and use proposed here is analogous to the original vision of the Internet: a distributed but connected infrastructure, interoperable and empowering. Public interest would be better served by a marketplace in which AI models and the infrastructure and data that they rely on are interoperable, well-governed and trustworthy. This would not be achieved automatically. Dedicated efforts backed by sufficient resources would be essential.

Final Report 65
Box 13: Global fund for AI: examples of possible investments
A relatively modest fund could help to create a minimum shared compute infrastructure for training small to
medium-sized models. Such models have important SDG potential, for example, for training farmers in their local
language.

This investment would also create a sandbox environment for developers to fine-tune existing open-source
models with their own contextual and high-quality data. Access to the compute and sandbox infrastructure could
be on a time-sharing basis, with reasonable usage fees contributing to maintenance and running costs.

A third use of the funding would be to help to curate gold standard data sets for select SDGs where the
commercial incentive is absent. The model development, testing and data curation efforts could come together
strategically in a powerful hands-on AI empowerment approach linked to concrete outcomes.

Finally, the fund could stimulate research and development, not only for contextually relevant development
and SDG-related applications of AI, but also for interlinking of compute and models as well as new governance
assessments.

157 We approach this recommendation with humility, conscious of the powerful market forces shaping access to talent and compute, and of geopolitical competition pushing back against collaboration in the field of science and technology. Unfortunately, many countries may be unable to access training, compute, models and training data without international support. Existing funding efforts might also not be able to scale without such support.

158 Levelling the playing field is, in part, a question of fairness. It is also in our collective interest to create a world in which all contribute to and benefit from a shared ecosystem. This is true not merely across States. Ensuring diverse access to AI model development and testing infrastructure would also help to address concerns about the concentration of disproportionate power in the hands of a handful of technology companies.

Fund purpose and objective

159 Our intention in proposing a fund is not to guarantee access to compute resources and capabilities that even the wealthiest countries and companies struggle to acquire. The answer may not always be more compute. We may also need different ways to leverage existing high-performance computing infrastructures, which are built for peak usage and not necessarily designed for AI. Perhaps there could be better ways to connect talent, compute and data.

160 The purpose is, therefore, to address the underlying coordination and implementation gaps in paragraphs 73, 80 and 81 for those unable to access the requisite enablers through other means, to ensure that:
a. Countries in need can access AI enablers, putting a floor under the AI divide;
b. Collaboration on AI capacity development leads to habits of cooperation and mitigates geopolitical competition;
c. Countries with divergent regulatory approaches have incentives to develop common templates for governing data, models and applications for societal-level challenges related to the SDGs and scientific breakthroughs.

161 The capacity built with resources from the global fund would be oriented towards the SDGs and the shared global governance of AI (box 13). It could, for instance, incorporate a “governance stack” for security and safety testing. This would help to mainstream best practices across the user base, while reducing the burden of validation for small users.



162 This public interest focus makes the global fund complementary to the proposal for an AI capacity development network, to which the fund would channel resources. The fund would also provide an independent capacity for monitoring of impact. In this manner, we ensure that vast swathes of the world are not left behind, but instead empowered to harness AI for the SDGs in different contexts.

163 It is in everyone’s interest to ensure that there is cooperation in the digital world as in the physical world. Analogies can be made to the efforts to combat climate change, where the costs of transition, mitigation or adaptation do not fall evenly, and international assistance is essential to help resource-constrained countries, so that they can join the global effort to tackle a planetary challenge.

164 Here, the focus is on using financing to help to ensure that a minimum capacity can be created in countries in different regions to understand AI’s potential for sustainable development, adapt and build models for local needs, and join international collaborative efforts on AI.

Fund governance

165 The fund would source and pool in-kind contributions, including from private sector entities. Coordinating financial and in-kind contributions requires appropriate levels of independent oversight and accountability. Governance arrangements should be inclusive, with board members drawn from government, the private sector, philanthropists, civil society and United Nations agencies. They should incorporate scientific and expert inputs, channelled (for example) through the proposed international scientific panel, and engender neutrality and trust for collaboration around data and model development.

Fund operations

166 The fund’s operating model should be informed by lessons from pooled international research and development collaborations, such as CERN and Gavi, the Vaccine Alliance, as well as lessons from commercial platforms for time-shared infrastructure. It should also draw lessons from bodies such as the Global Fund (established in 2002 to pool resources to defeat HIV, tuberculosis and malaria)43 and the Complex Risk Analytics Fund (which pools data in support of all stakeholders in crisis anticipation, prevention and response).

Global AI data framework

Recommendation 6: Global AI data framework

We recommend the creation of a global AI data framework, developed through a process initiated by a relevant agency such as the United Nations Commission on International Trade Law and informed by the work of other international organizations, for:
a. Outlining data-related definitions and principles for global governance of AI training data, including as distilled from existing best practices, and to promote cultural and linguistic diversity;
b. Establishing common standards around AI training data provenance and use for transparent and rights-based accountability across jurisdictions; and
c. Instituting market-shaping data stewardship and exchange mechanisms for enabling flourishing local AI ecosystems globally, such as:
i. Data trusts;
ii. Well-governed global marketplaces for exchange of anonymized data for training AI models; and
iii. Model agreements for facilitating international data access and global interoperability, potentially as techno-legal protocols to the framework.

167 In our consultations, we heard that although there have been plenty of proposals to promote wider access to data and data-sharing arrangements to create more diverse AI ecosystems, not many have materialized so far. This is a critical gap in developing inclusive and vibrant AI ecosystems.

43 See https://www.theglobalfund.org/en/about-the-global-fund.

168 Part of the answer is in transparency on cultural, linguistic and other traits of AI training data. Identifying underrepresented or “missing” data is also helpful. Related to this is the promotion of “data commons” that incentivize curation of training data for multiple actors. Such initiatives could create best practices by demonstrating how design can embed techno-legal frameworks for privacy, data protection, interoperability, the equitable use of data, and human rights.

169 The data marketplaces for AI are something of a “wild west” today. The idea of “grab what you can and hide it in opaque algorithms” seems to be one operating principle; another is exclusive contractual arrangements for access to proprietary data enforceable in select jurisdictions. Such exclusive relationships lie behind the United Kingdom Competition and Markets Authority’s concern that “the [Frontier Model] sector is developing in ways that risk negative market outcomes”.44

170 We consider it thus vital to launch a global process that involves a variety of actors, including nations at different levels of development, supported by relevant international organizations from the United Nations family and beyond (OECD, WIPO and the World Trade Organization), to create “guard rails” and “common rails” for flourishing AI training data ecosystems. The outcomes of this process need not be binding law but model contracts and techno-legal arrangements. These facilitative arrangements can be developed one by one, as protocols to a framework of principles and definitions.

171 While the full details are beyond our scope, key principles for a global AI data framework would include interoperability, stewardship, privacy preservation, empowerment, rights enhancement and AI ecosystem enablement.

172 We are mindful that antitrust and competition policy remain domains of national and regional authorities. However, international collective action can facilitate cross-border access to training data for local AI start-ups that is not available domestically.

173 The United Nations is uniquely positioned to support the establishment of global principles and practical arrangements for the governance and use of AI training data, building on years of work by the data community and integrating it with recent developments on AI ethics and governance. This is analogous to efforts of the United Nations Commission on International Trade Law on international trade, including on legal and non-legal cross-border frameworks, and enabling digital trade and investment via model laws on e-commerce, cloud computing and identity management.

174 Likewise, the Commission on Science and Technology for Development and the Statistical Commission have on their agenda data for development and data on the SDGs. There are also important issues of content, copyright and protection of indigenous knowledge and cultural expression being considered by WIPO.

175 The framework proposed here would be without prejudice to national or regional frameworks for data protection and would not create new data-related rights nor prescribe how existing rights apply internationally, but would have to be designed in a way that prevents capture by commercial or other interests that could undermine or preclude rights protections. Rather, a global AI data framework would address transversal issues of availability, interoperability and use of AI training data. It would help to build common understanding on how to align different national and regional data protection frameworks.

44 Competition and Markets Authority, AI Foundation Models: Technical Update Report (London, 2024).



Box 14: Securing data for training AI models: data empowerment, data
trusts and cross-border data flow arrangements

There are many circumstances in which data need to be protected (including for privacy, commercial
confidentiality, intellectual property, safety and security), but where there would also be benefits to individuals and
society in making them available for training AI models.

Data rights in law are generally rights to prevent actions in relation to data. Data privacy rights are also personal
to individuals. The way data rights are constituted can make it difficult to exercise them flexibly – enabling data
to be used for some purposes without losing the rights – or to exercise them collectively as a group.
Even when it is possible to control permissions flexibly and positively, doing so tends to require more time, technical
expertise and confidence than most people and organizations have.

Mechanisms that enable owners and subjects of data to allow safe and limited use of their data, while maintaining
their rights, can be described as means of data empowerment. Data empowerment can make many more people
and groups in society into active partners and stakeholders in AI, and not only subjects of data. There are already
tools in development for managing access securely, including data trusts and privacy-protecting applications for
steering cross-border data flows.

Data trusts are mechanisms that make it possible for individuals and organizations to provide access to their data
collectively, with access under the control of trustees. The data owners can set the terms for access, use and purpose,
which the trustees exercise. The owners and subjects of the data retain their legal rights while contributing to
shared objectives. An AI model trained on such data could be expected to perform more accurately than one that
lacked this specific input, and thus better serve the well-being of that particular group or of society more broadly.

Mechanisms for managing access and use, and access across borders in particular, all rely on dedicated legal
frameworks. Using these mechanisms in practice also requires adaptation to the needs and contexts of sectors
and communities. Gaps in data stewardship should be identified and closed. Successful and widespread use of
these mechanisms in the future would depend on technical assurance and maintaining the trust of contributors of
data.

We thus propose that more support be given to the further development of these tools, and to identifying the areas
where their use for training AI could deliver the greatest public value.

176 Steps to address these issues at the national and regional level are promising, with the public and private sector paying more attention to best practices. Yet without a global framework governing AI training data sets, commercial competition invites a race to the bottom between jurisdictions on access and use requirements, making it difficult to govern the AI value chain internationally. Only global collective action can promote a race to the top in the governance of the collection, creation, use and monetization of AI training data in ways that further interoperability, stewardship, privacy preservation, empowerment and rights enhancement.

177 Equally, such action is necessary to promote flourishing local AI ecosystems and limit further economic concentration. These measures could be complemented by promotion of data commons and provisions for hosting data trusts in areas relevant to the SDGs (see box 14). The development of these templates and the actual storage and analysis of data held in commons or in trusts could be supported by the capacity development network and the global fund for AI.

D. Coherent effort

178 By promoting a common understanding, common ground and common benefits, the proposals above seek to address the gaps identified in the emerging international AI governance regime. The gaps in representation, coordination and implementation can be addressed through partnerships and collaboration with existing institutions and mechanisms.

179 However, without a dedicated focal point in the United Nations to support and enable soft coordination among such and other efforts, and to ensure that the United Nations system speaks with one voice regarding AI, the world will lack the inclusively networked, agile and coherent approach required for effective and equitable governance of AI.

180 For these reasons, we propose the creation of a small, agile capacity in the form of an AI office within the United Nations Secretariat.

AI office in the United Nations Secretariat

Recommendation 7: AI office within the Secretariat

We recommend the creation of an AI office within the Secretariat, reporting to the Secretary-General. It should be light and agile in organization, drawing, wherever possible, on relevant existing United Nations entities. Acting as the “glue” that supports and catalyses the proposals in this report, partnering and interfacing with other processes and institutions, the office’s mandate would include:
a. Providing support for the proposed international scientific panel, policy dialogue, standards exchange, capacity development network and, to the extent required, the global fund and global AI data framework;
b. Engaging in outreach to diverse stakeholders, including technology companies, civil society and academia, on emerging AI issues; and
c. Advising the Secretary-General on matters related to AI, in coordination with other relevant parts of the United Nations system to offer a whole-of-United Nations response.

181 During our consultations, it became clear that the case for an agency with reporting, monitoring, verification and enforcement powers has not been made thus far, and there has not yet been much appetite on the part of Member States for an expensive new organization.

182 We, therefore, focus on the value that the United Nations can offer, mindful of the shortcomings of the United Nations system, as well as what could realistically be achieved within a year. In this regard, we propose a light, agile mechanism to act as the “glue” that holds together processes promoting a common understanding, common ground and common benefits, and enables the United Nations system to speak with one voice in the evolving international AI governance ecosystem.

183 Just as countries have set up dedicated institutes and offices focused on the national, regional and international governance of AI,45 we see the need for a capacity that services and supports the international scientific panel on AI and AI policy dialogue, and catalyses the AI standards exchange and capacity development network – with lower overheads and transaction costs than if each were supported by different organizations.

184 An AI office within the United Nations Secretariat, reporting to the Secretary-General, would have the benefit of connections throughout the United Nations system, without being tied to one part of it. That is important because of the uncertain future of AI and the strong likelihood that it will permeate all aspects of human endeavour.

185 A small and agile AI office would be well positioned to connect various domains and organizations on AI governance issues to help to address gaps dynamically, working to amplify existing efforts within and beyond the United Nations. By bridging

45 Including Canada, Japan, the Republic of Korea, Singapore, the United Kingdom, the United States and the European Union.



Figure 17: Proposed role of the United Nations in the international AI governance ecosystem

[Figure (indicative, not exhaustive): existing initiatives – AI summits, GPAI, OECD, the Council of Europe, the Group of 20, the Group of Seven, SDOs, regional organizations, and national and regional initiatives – grouped under common understanding, common ground and common benefits, and linked through United Nations engagement to the proposed international scientific panel, governance dialogue, standards exchange, capacity development network, global fund for AI and AI data framework, with the United Nations as enabling connector.]

Abbreviations: GPAI, Global Partnership on Artificial Intelligence; OECD, Organisation for Economic Co-operation and Development; SDOs, standards development organizations.

and connecting other initiatives, such as those led by regional organizations and other plurilateral initiatives, it can help to lower the costs of cooperation between them.

186 Such a body should champion inclusion and partner rapidly to accelerate coordination and implementation, drawing, as a first priority, on existing resources and functions within the United Nations system. It could be staffed in part by United Nations personnel seconded from relevant specialized agencies and other parts of the United Nations system. It should engage multiple stakeholders, including civil society, industry and academia, and develop partnerships with leading organizations outside of the United Nations, such as OECD.

187 The AI office would ensure information-sharing across the United Nations system and enable the system to speak with authority and with one voice. Box 15 lists possible functions and early deliverables of such an office.

188 This recommendation is made on the basis of a clear-eyed assessment as to where the United Nations can add value, including where it can lead, where it can fill gaps, where it can aid coordination and where it should step aside, working in close partnership with existing efforts (see fig. 17). It also brings the benefits of existing institutional arrangements, including pre-negotiated funding and administrative processes that are well understood.

189 The evolving characteristics of AI technology should be considered. There is a high probability of technical breakthroughs that will dramatically change the current AI model landscape. The AI office should be in place to adjust governance frameworks to this evolving landscape and to respond to unforeseen developments concerning AI technology.

Box 15: Possible functions and first-year deliverables of the AI office
The AI office should have a light structure and aim to be agile, trusted and networked. Where necessary, it should
operate in a “hub and spoke” manner to connect to other parts of the United Nations system and beyond.

Outreach could include serving as a key node in a so-called soft coordination architecture between Member
States, plurilateral networks, civil society organizations, academia and technology companies – a regime complex
woven together to solve problems collaboratively through networking – and acting as a safe, trusted place to
convene on relevant topics. Ambitiously, the office could become the glue that helps to hold such other evolving
networks together.

Supporting the various initiatives proposed in this report includes the important function of ensuring inclusiveness
at speed in delivering outputs such as scientific reports and governance dialogues, and identifying appropriate
follow-up entities.

Common understanding:
• Facilitate recruitment of and support the international scientific panel.

Common ground:
• Service policy dialogues with multi-stakeholder inputs in support of interoperability and policy learning.
An initial priority topic is the articulation of risk thresholds and safety frameworks across jurisdictions.
• Support ITU, ISO/IEC and IEEE on setting up the AI standards exchange.

Common benefits:
• Support the AI capacity development network with an initial focus on building public interest AI capacity
among public officials and social entrepreneurs. Define the initial network vision, outcomes, governance
structure, partnerships and operational mechanisms.
• Define the vision, outcomes, governance structure and operational mechanisms for the global fund for AI,
and seek feedback from Member States, industry and civil society stakeholders on the proposal, with a
view to funding initial projects within six months of establishment.
• Prepare and publish an annual list of prioritized investment areas to guide both the global fund for AI and
investments outside that structure.

Coherent effort:
• Establish lightweight mechanisms that support Member States and other relevant organizations to be
more connected, coordinated and effective in pursuing their global AI governance efforts.
• Prepare initial frameworks to guide and monitor the AI office’s work, including a global governance risk
taxonomy, a global AI policy landscape review and a global stakeholder map.
• Develop and implement quarterly reporting and periodic in-person presentations to Member States on
the AI office’s progress against its workplan and establish feedback channels to support adjustments as
needed.
• Establish a steering committee jointly led by the AI office, ITU, UNCTAD, UNESCO and other relevant
United Nations entities and organizations to accelerate the work of the United Nations in service of the
functions above, and review progress of the accelerated efforts every three months.
• Promote joint learning and development opportunities for Member State representatives to support them
to carry out their responsibilities for global AI governance, in cooperation with relevant United Nations
entities and organizations such as the United Nations Institute for Training and Research and the United
Nations University.



E. Reflections on institutional models

190 Discussions about AI often resolve into extremes. In our consultations around the world, we engaged with those who see a future of boundless opportunities provided by ever-cheaper, ever-more-helpful AI systems. We also spoke with those wary of darker futures, of division and unemployment, and even extinction.

191 We do not know what the future may bring. We are mindful that the technology may go in a direction that does away with this duality. In this report, we have focused on the near-term opportunities and risks, based on science. The recommendations outlined herein offer our best hope for reaping the benefits of AI while minimizing and mitigating the risks. We are also mindful of the practical challenges to international institution-building on a larger scale. This is why we are proposing a networked institutional approach with light and agile support.

192 If or when risks become more acute and the stakes for opportunities escalate, however, such calculations will change. The world wars led to the modern international system; the development of ever-more-powerful weapons led to regimes limiting their spread and promoting peaceful uses of the underlying technologies.

193 Evolving understanding of our common humanity led to the modern human rights system and our ongoing commitments to the SDGs for all. Climate change evolved from a niche concern to a global challenge. AI may similarly rise to a level that requires more resources and more authority than proposed in this report.

194 Our terms of reference included considering the functions, forms and timelines for a new international agency for AI. We conclude the present report with some reflections on the issue, although we do not currently recommend establishing such an agency.

An international AI agency?

195 If the risks of AI become more serious, and more concentrated, it might become necessary for Member States to consider a more robust international institution with monitoring, reporting, verification and enforcement powers.

196 There is precedent for such evolution. From the Hague Conventions of 1899 and 1907, to the 1925 Geneva Protocol, and culminating in the Chemical Weapons Convention in 1993, dual-use chemicals have long been subject to limits on access, with protocols for storage and usage, and a ban on weaponization.

197 Biological weapons have also been banned, along with periodic limits on research, such as the limits on recombinant DNA or gene-splicing in 1975. These emphasized containment as an essential consideration in experiment design, with the level of containment tied to the estimated risk. Certain classes of high-risk experiment for which containment could not be guaranteed were essentially prohibited. Other examples include research that threatens to cross fundamental ethical lines, such as the ongoing restrictions on human cloning – an example of the kind of “red line” that may one day be needed in the context of AI research, along with effective cooperation regarding enforcement.

198 Continued scientific assessments are also a feature of some of these frameworks, for example the Scientific Advisory Board of the Organisation for the Prohibition of Chemical Weapons and article XII of the Biological Weapons Convention.

199 The comparison between AI and nuclear energy is well known. From the day the atom was split, it was clear to scientists that this technology could be used for good – even though their research was directed at constructing a new and terrible weapon. Then, as now, it was telling that leading scientists were among those who called most ardently for a limit on this new technology.

Final Report 73
200 The grand bargain at the heart of the International Atomic Energy Agency (IAEA) was that nuclear energy’s beneficial purposes could be shared – in energy production, agriculture and medicine – in exchange for guarantees that it would not be further weaponized. As the nuclear non-proliferation regime shows, good norms are necessary but not sufficient for effective regulation.

201 The limits of the analogy are clear. Nuclear energy involves a well-defined set of processes related to specific materials that are unevenly distributed, and much of the materials and infrastructure needed to create nuclear capability are controlled by nation States. AI is an amorphous term; its applications are extremely wide and its most powerful capabilities span industry and States. The grand bargain of IAEA focused on weapons that are expensive to build and difficult to hide; weaponization of AI promises to be neither.

202 An early idea – pooling of nuclear fuel for peaceful purposes – did not work out as planned. On the pooling of resources for sharing benefits of technology, a more AI-appropriate analogy may be CERN, which pools funding, talent and infrastructure. However, there are limits to the comparison, given the difference between experimental fundamental physics and AI, which requires a more distributed approach.

203 Another imperfect analogy is organizations such as the International Civil Aviation Organization (ICAO) and the International Maritime Organization (IMO). The underlying technologies of transportation are well established, and their civilian applications can be easily demarcated from military ones – this is not the case with general-purpose AI. The network of national regulatory authorities that apply the international norms developed in the framework of ICAO and IMO is also well established. Safety, facilitation of commercial activity, and interoperability are in focus. Compliance is not handled in a top-down manner.

204 There are other approaches to compliance that can inspire. Financial risk management benefits from mechanisms such as the Financial Stability Board (FSB) and the Financial Action Task Force (FATF), without recourse to treaties.

205 Eventually, some kind of mechanism at the global level might become essential to formalize red lines if regulation of AI needs to be enforceable. Such a mechanism might include formal CERN-like commitments for pooling resources for collaboration on AI research and sharing of benefits as part of the bargain.

206 Given the speed, autonomy and opacity of AI systems, however, waiting for a threat to emerge may mean that any response will come too late. Continued scientific assessments and policy dialogue would ensure that the world is not surprised. Any decision to begin a formal process would, naturally, lie with Member States.

207 Possible thresholds for such a move could include the prospect of uncontrollable or uncontainable AI systems being developed, or the deployment of systems that are unable to be traced back to human, corporate or State actors. They could also include indications that AI systems exhibit qualities that suggest the emergence of “superintelligence”, although this is not present in today’s AI systems.

208 Establishing a watching brief, drawing on diverse and distinguished experts to monitor the horizon, is a reasonable first step. The scientific panel could be tasked with commissioning research on this question, as part of its quarterly research digest series. Over time, the policy dialogue could be an appropriate forum for sharing information about AI incidents, such as those that stretch or exceed the capacities of existing agencies, analogous to the practices of IAEA for mutual reassurance on nuclear safety and nuclear security, or the World Health Organization (WHO) on disease surveillance.

209 The functions of a proposed international AI agency could draw on the experience of relevant agencies, such as IAEA, the Organisation for the Prohibition of Chemical Weapons, ICAO, IMO, CERN and the Biological Weapons Convention. They could include:
• Developing and promulgating standards and norms for AI safety;
• Monitoring AI systems that have the potential to threaten international peace and security, or cause grave breaches of human rights or international humanitarian law;
• Receiving and investigating reports of incidents or misuses, and reporting on serious breaches;
• Verifying compliance with international obligations;
• Coordinating accountability, emergency responses and remedies for harm regarding AI safety incidents;
• Promoting international cooperation for peaceful uses of AI.

210 A tailored approach to designing any future AI agency would be required, drawing on lessons from other institutions as appropriate (see box 16).

Box 16: Lessons learned from past global governance institutions


AI is a unique set of technologies with risks and societal impacts that transcend borders. However, it is not the first set of technologies to have prompted global governance arrangements. Civil aviation, climate change, nuclear power and terrorism finance are also complex and multidimensional domains that have warranted a global response.

Some of these domains, such as civil aviation, climate change and nuclear power, have led to the creation of new United Nations institutions. Others, notably the protection of global financial flows, have led to bodies that are not treaty-based yet have delivered robust normative frameworks, effective market-based enforcement mechanisms and strong public-private partnerships.

As we draw parallels between these institutional responses and nascent efforts to do the same for AI, we should
not focus too heavily on which institutional analogue is most suitable for the AI problem set. Our interim report
foreshadowed that we should look instead at which governance functions are needed for effective and inclusive
global AI governance, and what we can learn from past global governance endeavours.

One lesson is that the development of a shared scientific and technical understanding of the problem is necessary
to trigger a commonly accepted policy response. Here, IPCC, which continues to address the risks of climate
change, is a useful model. It offers an example of how an inclusive approach to crafting reports and developing
scientific consensus in a constantly evolving area can level the playing field for researchers and policymakers
and create the shared understanding that is essential for effective policymaking. The process of drafting and
disseminating IPCC reports and global stocktakes, although not without challenges, has been centrally important
to building a shared understanding and common knowledge base, lowering the costs of cooperation and steering
the Conference of the Parties to the United Nations Framework Convention on Climate Change towards concrete
policy deliverables.

For AI, as the technology evolves, it will be just as important to develop a shared scientific understanding. As the capabilities of AI systems continue to advance, and as potential risks come to exceed known effective approaches to mitigating them, the international scientific panel could evolve to match emerging needs.

A second lesson is that multi-stakeholder collaboration can deliver strong standards and promote quick
responses. Here, ICAO and FATF offer useful examples of how to govern a highly technical issue across borders.
In civil aviation, the ICAO safety and security standards, developed by industry and government experts and
enforced through market access restrictions, ensure that a plane that takes off from, for example, New York can
land in Geneva without triggering new safety audits. A combination of ICAO-led safety audits and Member State-driven audits ensures consistent implementation, even as the technology evolves.

FATF – established by the G7 in 1989 to address money-laundering – offers another example of how soft law institutions can promote common standards and implementation. Its peer review system for monitoring is flexible, and widespread acceptance of its recommendations has created reputational costs for those companies and Member States that fail to comply. Even as the risks to international financial flows have evolved, most significantly with the rise of terrorism and proliferation finance, the nimble structure and normative framework of FATF have allowed it to respond quickly and keep pace with complex challenges.

In their own unique ways, both ICAO and FATF have created widely recognized international standards, domestic
frameworks for measuring compliance, and interoperable systems for responding to certain classes of risks and
challenges that manifest across jurisdictions. ICAO enforces via market access incentives and restrictions, while
FATF creates reputational risk for non-compliance. Both offer useful templates for AI, as they demonstrate how governments and other stakeholders can work together to create a web of interconnected norms and regulations and impose costs for non-compliance.

A third lesson is that global coordination is often vital for monitoring and taking action in response to severe risks
with the potential for widespread impact. The FSB and IAEA models offer key examples. FSB was established in 2009 by the G20 countries to monitor and warn against systemic risks to the international financial system.
Its unique composition of G20 finance officials and international financial and development organizations has
allowed it to be nimble, adept and inclusive when coordinating efforts to identify global financial risks.

The IAEA approach to nuclear safeguards offers a different model. Its comprehensive safeguards agreements,
signed by 182 States, are part of the most wide-ranging United Nations regime for ensuring compliance. By using
a combination of inspections and monitoring – as well as the threat of Security Council action – IAEA offers
perhaps the most visible censure of Member States who fail to comply.

Both FSB and IAEA demonstrate how international coordination is fundamental to monitoring severe risks. As
the risks of AI become clearer and more pronounced, there may be a similar need to create a new AI-focused
institution to maximize coordination efforts and monitor severe and systemic risks, so that Member States can,
wherever possible, intervene to stay ahead of those risks.

A fourth lesson is that it is important to create inclusive access to the resources needed for research and
development, along with their benefits. The experiences of CERN and IAEA are both instructive. CERN brings
together world-class scholars and physicists to perform complex research into particle accelerators and other
projects that are meant to benefit humanity. It also offers training to physicists and engineers.

Similarly, IAEA facilitates access to technology, in this case nuclear energy and ionizing radiation. The basic
trade-off is simple: Member States comply with nuclear safeguards and IAEA offers technical assistance towards
the use of peaceful nuclear power. In this regard, IAEA provides an inclusive approach to spreading the benefits
of technology to developing countries. Its facilitation of a network of centres of excellence on nuclear security is
similar to our recommendation for a networked approach to capacity-building.

As we have explained above, AI is a set of technologies whose benefits need to be shared in a more inclusive
and equitable manner, especially with countries in the global South. This is why we have recommended both an
AI capacity development network and a global fund for AI. As we learn more about AI through the work of the
international scientific panel, and as the responsible deployment of AI in support of the SDGs becomes even more
pressing, United Nations Member States may want to institutionalize this function more widely. If they do so, they
should look to draw lessons from CERN and IAEA as useful models for supporting broader access to resources,
as part of an overall global AI governance structure.

76 Governing AI for Humanity


5. Conclusion: a call to action
211 As experts, we remain optimistic about the future of AI and its potential for good. That optimism depends, however, on realism about the risks and the inadequacy of structures and incentives currently in place. We also need to be realistic about international suspicions that could get in the way of the global collective action needed for effective and equitable governance. The technology is too important, and the stakes are too high, to rely only on market forces and a fragmented patchwork of national and multilateral action.

212 We need to be active and purposeful. Beyond the duality of opportunity and risk is the challenge of rapid and cross-cutting change. AI’s downstream impact may leave few people untouched. To place its governance in the hands of a few developers, or the countries that host them, will create a deeply unfair situation where the impacts of developing, deploying and using AI are imposed on most people without their having any say in the decisions for doing so.

213 The past year of global attention and discussion on AI governance has given us hope. There are divergences across countries and sectors, but also a strong desire for dialogue. Engaging diverse experts, policymakers, businesspeople, researchers and advocates – across regions, genders and disciplines – has shown us that diversity need not lead to discord, and dialogue can lead to common ground and collaboration.

214 Sometimes we hesitated: Should we be pragmatic and focus on what seems feasible? Or should we aim high with lofty ambition? In the end, we resolved to do both. Our proposals reflect a comprehensive vision for an equitable and effective global AI governance regime, with careful thought on how it can be implemented, step by step.

215 We are grateful to the many people, organizations and Member States that have contributed to our deliberations, including the representatives of United Nations agencies and Secretariat personnel who offered discerning assessments of the capabilities and the limitations of the United Nations in this complex area. The issue of AI governance is not only about managing the implications of this technology. Also at stake is the future of multilateral and multi-stakeholder cooperation.

216 When we look back in five years, the technology landscape could appear drastically different from today. However, if we stay the course and overcome hesitation and doubt, we can look back in five years at an AI governance landscape that is inclusive and empowering for individuals, communities and States everywhere. It is not technological change itself, but how humanity responds to it, that ultimately matters.

217 We believe that the functions and forms recommended in this report, if implemented in good faith, can deliver an agile and adaptable regime that stays in step with AI’s march and helps to reap its benefits and address its risks. They can help us to spot problems and opportunities in time, use shared principles and frameworks to align international action, promote international cooperation, and build capacity of individuals and institutions to deal with change.

218 The implementation of the recommendations in the present report may also encourage new ways of thinking: a collaborative and learning mindset, multi-stakeholder engagement and broad-based public engagement. The United Nations can be the vehicle for a new social contract for AI that ensures global buy-in for a governance regime that protects and empowers us all. Such a contract will ensure that opportunities are fairly accessed and distributed, and the risks are not loaded onto the most vulnerable – or passed on to future generations, as we have seen tragically with climate change.

219 As a group and as individuals from across many fields of expertise, organizations and parts of the world, we look forward to continuing this crucial conversation. Together with the many we have connected with on this journey, and the global community that they represent, we hope that this report contributes to our combined efforts to govern AI for humanity.



Annexes

Annex A: Members of the High-level Advisory Body on Artificial Intelligence

Anna Abramova
Omar Sultan Al Olama
Latifa Al-Abdulkarim
Estela Aranha
Carme Artigas (Co-Chair)
Ran Balicer
Paolo Benanti
Abeba Birhane
Ian Bremmer (Co-Rapporteur)
Anna Christmann
Natasha Crampton
Nighat Dad
Vilas Dhar
Virginia Dignum
Arisa Ema
Mohamed Farahat
Amandeep Singh Gill
Wendy Hall
Rahaf Harfoush
Ruimin He
Hiroaki Kitano
Haksoo Ko
Andreas Krause
James Manyika (Co-Chair)
Maria Vanina Martinez Posse
Seydina Moussa Ndiaye
Mira Murati
Petri Myllymäki
Alondra Nelson
Nazneen Rajani
Craig Ramlal
Emma Ruttkamp-Bloem
Marietje Schaake (Co-Rapporteur)
Sharad Sharma
Jaan Tallinn
Philip Thigo
Jimena Sofia Viveros Álvarez
Zeng Yi
Zhang Linghan

Annex B: Terms of reference of the
High-level Advisory Body on Artificial
Intelligence
The High-level Advisory Body on Artificial Intelligence, convened by the United Nations
Secretary-General, will undertake analysis and advance recommendations for the
international governance of artificial intelligence. The Body’s initial reports will provide high-
level expert and independent contributions to ongoing national, regional, and multilateral
debates.

The Body will consist of 38 members from governments, private sector, civil society, and
academia, as well as a member Secretary. Its composition will be balanced by gender, age,
geographic representation, and area of expertise related to the risks and applications of
artificial intelligence. The members of the Body will serve in their personal capacity.

The Body will engage and consult widely with governments, private sector, academia, civil
society, and international organizations. It will be agile and innovative in interacting with
existing processes and platforms as well as in harnessing inputs from diverse stakeholders.
It could set up working parties or groups on specific topics.

The members of the Body will be selected by the Secretary-General based on nominations
from Member States and a public call for candidates. It will have two Co-Chairs and
an Executive Committee. All stakeholder groups will be represented in the Executive
Committee.

The Body shall be convened for an initial period of one year, with the possibility of extension
by the Secretary-General. It will have both in-person and online meetings.

The Body will prepare a first report by 31 December 2023 for the consideration of the
Secretary-General and the Member States of the United Nations. This first report will present
a high-level analysis of options for the international governance of artificial intelligence.

Based on feedback to the first report, the Body will submit a second report by 31 August
2024 which may provide detailed recommendations on the functions, form, and timelines for
a new international agency for the governance of artificial intelligence.

The Body shall avoid duplication with existing forums and processes where issues of
artificial intelligence are considered. Instead, it shall seek to leverage existing platforms
and partners, including UN entities, working in related domains. It shall fully respect current
UN structures as well as national, regional, and industry prerogatives in the governance of
artificial intelligence.

The deliberations of the Body will be supported by a small secretariat based in the Office
of the Secretary-General’s Envoy on Technology and be funded by extrabudgetary donor
resources.



Annex C: List of consultation engagements in 2024
Engagement Date, 2024 Region
UNESCO Slovenia 5 Jan. Europe
Secretary-General’s Scientific Advisory Board 8 Jan. Global
Presentation to Member States on the interim report 12 Jan. Global
World Economic Forum in Davos 24 Jan. Europe
Association of Southeast Asian Nations (ASEAN) Digital Senior Officials’ Meeting 30 Jan. Asia
World Government Summit 12 Feb. Middle East
Montreal Institute for Learning Algorithms (Mila - Quebec AI Institute) 14 Feb. North America
Berlin Consultation 15 Feb. Europe
Euro-Asian IT Forum 20 Feb. Global
Mobile World Congress 26 Feb. Europe
Moscow State Institute of International Relations 28 Feb. Europe
Royal Society workshop on international AI governance 28 Feb. Europe
Foreign Ministries Science & Technology Advice Network 28 Feb. Global
OECD-African Union AI dialogue 4 Mar. Europe
Brussels Consultation 5 Mar. Europe
World Bank, Global Digital Summit 5 Mar. North America
Open Science and Artificial Intelligence: ethical issues webinar 5 Mar. Eastern Europe
UNESCO Digital Transformation Dialogue 6 Mar. Europe
Inter-Parliamentary Union 6 Mar. Global
47th session of the High-level Committee on Programmes 11 Mar. Global
Global Youth Summit on Digital Rights 13 Mar. Latin America
Group of Seven (G7) summit on AI in Trento, Italy 15 Mar. Europe
Kick-off consultative network meetings, 18–19 March 18 Mar. Global
68th session of the Commission on the Status of Women 21 Mar. North America
Advisory Body update to Member States 25 Mar. Global
African Observatory on Responsible AI 25 Mar. Africa
AI for sustainable and inclusive futures conference - French Development Agency 26 Mar. Europe
Shaping Global Norms: collective feedback 28 Mar. Africa
Innovate Switzerland 2 Apr. Europe
OSET visit to China, 9–12 April 9 Apr. Asia
Russian Internet Governance Forum 9 Apr. Eastern Europe
Wharton Cypher Days - Finance 12 Apr. North America
Silicon Valley visit 15 Apr. North America
Stanford, AI+Policy Symposium: A Global Stocktaking 16 Apr. North America
United Nations Commission on Science and Technology for Development 16 Apr. Europe
Group of 20 (G20) Digital Economy, 16–18 April, Brazil 17 Apr. Latin America
Advisory Body update to Member States 22 Apr. Global
United Nations University, Macau AI Conference, 24–25 April 24 Apr. Asia
OSET visit to Brussels and Paris, 25–26 April 26 Apr. Europe
Advisory Body presentation to National AI Advisory Committee (United States) 2 May North America
Global Artificial Intelligence (GAIN) Assembly in Riyadh, with the Islamic World Educational, Scientific and Cultural Organization (53 countries, 4 regions) 14 May Middle East
AI in interests of sustainable development: Kazakhstan’s contribution to the 2030 Agenda 20 May Asia
Group of Latin American and Caribbean States 21 May Latin America
BRICS Academic Forum 22 May Global
AI governance session in Seoul 23 May Asia
Tech Summit Asia, Singapore, 29–31 May 29 May Asia
AI for Good Global Summit, 29–31 May 29 May Europe

Annex D: List of “deep dives”
Domain Date (Eastern Daylight Time)

Education 29 March

Intellectual property and content 2 April

Children 4 April

Peace and security (1) 12 April

Peace and security (2) 29 April

Agriculture (session 1) 30 April

Agriculture (session 2) 30 April

Faith-based 1 May

Open-source and technology direction 1 May

Impact on society 3 May

Gender 7 May

Data 13 May

Future of work 13 May

Standards (session 1) 14 May

Standards (session 2) 14 May

Peace and security (3) 20 May

Environment 20 May

Health 22 May

Rule of law, human rights and democracy 24 May



Annex E: Risk Global Pulse Check responses
On the request of the High-level Advisory Body on Artificial Intelligence, the Office of the Secretary-General’s Envoy
on Technology (OSET) conducted an AI Risk Global Pulse Check survey, as part of a horizon-scanning exercise on
AI to capture perceptions on AI risks from experts from around the world. Experts were asked to respond with their
views in their personal capacity (not on behalf of their institution or employer). Experts were asked to rate the degree
to which they expected AI technical change and (separately) AI adoption and application to accelerate or decelerate.

They were also asked to rate their overall level of concern that harms (existing or new) resulting from AI would
become substantially more serious and/or widespread, and how much that concern had recently increased or
decreased. Respondents were given a list of 14 sample areas of harm (such as “Intentional malicious use of AI
by non-State actors”) to rate their level of concern. Finally, many text-response prompts were provided, inviting
experts to comment on emerging trends, and individuals, groups and (eco)systems at particular risk from AI, and to
elaborate on their rated answers.

The survey was fielded from 13 to 25 May 2024, with the invitee list constructed from OSET and the Advisory Body’s
networks, including participants in Advisory Body deep dives. During the fielding period, additional experts were
continually invited, particularly from regions often less represented in discussions around AI, based on referrals from
initial respondents and outreach to regional networks. More than 340 respondents replied to the survey, providing a
rich and diverse perspective (including across regions and gender) on risks posed by AI.

Overview of sample

Split by gender and region is evenly balanced; univariate analysis by gender or region is therefore not immediately contaminated by the other variable.

Respondents by region of nationality* and gender (n = 348):
WEOG nationality – 175 total (96 men, 78 women, 1 non-binary)
Non-WEOG nationality – 173 total (95 men, 77 women, 1 non-binary)

* 43 respondents (12%) indicated multiple nationalities. If respondents were resident in one of their countries of nationality, that nationality was used for analysis (34 of 43). Otherwise, the least represented nationality was used (9 of 43).
Source: OSET AI Risk Pulse Check, 13-25 May 2024.

Sample remains global if considered by residence: 84% of respondents reside in the same region as their nationality.

Respondents by region, nationality* versus residence (n = 348):
WEOG – nationality 175 (50%); residence 198 (57%)
Africa – nationality 67 (19%); residence 58 (17%)
Asia-Pacific – nationality 63 (18%); residence 54 (16%)
Latin America and the Caribbean – nationality 30 (9%); residence 28 (8%)
Eastern Europe – nationality 13 (4%); residence 10 (3%)

38 non-WEOG nationals reside in WEOG, while 15 WEOG nationals reside in other regions – a net difference of 23 respondents.

* 43 respondents (12%) indicated multiple nationalities. If respondents were resident in one of their countries of nationality, that nationality was used for analysis (34 of 43). Otherwise, the least represented nationality was used (9 of 43).
Source: OSET AI Risk Pulse Check, 13-25 May 2024.

Respondents by region of nationality* and gender (n = 348):
WEOG – 96 men, 78 women, 1 non-binary – 175 (50%)
Africa – 38 men, 29 women – 67 (19%)
Asia-Pacific – 36 men, 26 women, 1 non-binary – 63 (18%)
Latin America and the Caribbean – 14 men, 16 women – 30 (9%)
Eastern Europe – 7 men, 6 women – 13 (4%)

The 348 respondents came from 68 countries. The most represented were the United States 72 (21%), the United Kingdom 17 (5%), India 16 (5%), Canada 14 (4%), China 14 (4%), Germany 13 (4%) and South Africa 11 (3%).

The Western European and Others Group (WEOG) includes western Europe plus Australia, Canada, Israel, New Zealand, Türkiye and the United States of America.

* 43 respondents (12%) indicated multiple nationalities. If respondents were resident in one of their countries of nationality, that nationality was used for analysis (34 of 43). Otherwise, the least represented nationality was used (9 of 43).
Source: OSET AI Risk Pulse Check, 13-25 May 2024.



Profiles of men and women respondents have some differences: more men report technical expertise; more women report governance, policy and law/ethics expertise.

Profiles of WEOG and non-WEOG respondents are reasonably similar, although non-WEOG respondents are more likely to be in the public sector or academia than in the private sector or industry.

[Chart: % of respondents reporting each affiliation (private sector / industry; public sector / government; academia; civil society) and each area of expertise (technical expertise training / developing AI; implementing / commercializing new AI technology; scientific or technical expertise, not AI-specific; government / politics / law / ethics on AI; government / politics / law / ethics, not AI-specific), by region of nationality* (n = 348).]

* 43 respondents (12%) indicated multiple nationalities. If respondents were resident in one of their countries of nationality, that nationality was used for analysis (34 of 43). Otherwise, the least represented nationality was used (9 of 43).
Source: OSET AI Risk Pulse Check, 13-25 May 2024.

Perceptions regarding acceleration of AI

74% of respondents expect acceleration of technical change; a higher percentage of non-WEOG respondents expect acceleration compared with WEOG respondents.

Note: Excludes “Don’t know” / “No opinion” and blank responses.
Source: OSET AI Risk Pulse Check, 13-25 May 2024.

89% of respondents expect acceleration of adoption and application; slightly more non-WEOG respondents expect substantial acceleration (especially men).

“In the next 18 months, compared to the last 3 months, do you expect the pace of adoption and application of AI (e.g. new uses of AI in business / government) to:” (n = 348; 5 = substantially accelerate, 4 = accelerate, 3 = remain same, 2 = decelerate, 1 = substantially decelerate). Total average: 4.24 / 5.

Group – Decelerate / Remain same / Accelerate / Substantially accelerate – Average
Men – 0% / 11% / 52% / 37% – 4.27
Women – 1% / 10% / 59% / 30% – 4.19
WEOG – 0% / 11% / 61% / 28% – 4.17
Non-WEOG – 1% / 9% / 49% / 41% – 4.31
Men, WEOG – 0% / 11% / 60% / 28% – 4.17
Women, WEOG – 0% / 10% / 64% / 26% – 4.16
Men, non-WEOG – 0% / 10% / 44% / 47% – 4.37
Women, non-WEOG – 1% / 9% / 54% / 35% – 4.23

No group had respondents expecting substantial deceleration. Note: Numbers may not add up to 100% owing to rounding. Excludes “Don’t know” / “No opinion” and blank responses.
Source: OSET AI Risk Pulse Check, 13-25 May 2024.



Limited impact from technical expertise (training / developing AI)

Respondents reporting technical expertise are slightly more pessimistic on technical change, and slightly more optimistic on adoption and application: 71% of them expect acceleration of technical change, versus 76% of other respondents; 94% expect acceleration of adoption and application, versus 87% of other respondents (n = 348).

Note: Numbers may not add up to 100% owing to rounding. Excludes “Don’t know” / “No opinion” and blank responses.
Source: OSET AI Risk Pulse Check, 13-25 May 2024.

Perceptions regarding risks of AI harms in the next 18 months (from May 2024)

71% concerned/very concerned about AI harms in the next 18 months
African respondents are more concerned than others; Asia-Pacific respondents are less concerned than WEOG.

“What is your current level of concern that harms (existing or new) resulting from AI will become substantially more serious and/or widespread in the next 18 months for each area?” (n = 348)
Scale: 5 = Very concerned, 4 = Concerned, 3 = Somewhat concerned, 2 = Slightly concerned, 1 = Not concerned. Total average: 3.98 / 5.

BY REGION                            Average
WEOG                                 3.93
Africa                               4.26
Latin America and the Caribbean      4.07
Eastern Europe                       4.00
Asia-Pacific                         3.79

Note: Excludes “Don’t know” / “No opinion” and blank responses.
Source: OSET AI Risk Pulse Check, 13-25 May 2024.

Non-WEOG more concerned than WEOG in most example areas
Particularly large gaps in inaccurate information, unintended autonomous actions and intentional corporate use.

“Please rate your current level of concern that harms (existing or new) resulting from AI will become substantially more serious and/or widespread in the next 18 months for each area.” (n = 348)

BY REGION (average rating)                                                              WEOG    Non-WEOG   Non-WEOG - WEOG
j. Damage to information integrity                                                      4.28    4.13       -0.15
b. Intentional use of AI in armed conflict by State actors                              4.03    4.20        0.17
h. Inequalities arising from differential control and ownership over AI technologies    4.06    4.14        0.08
a. Intentional malicious use of AI by non-State actors                                  4.00    4.05        0.05
l. Discrimination / disenfranchisement, particularly against marginalized communities   3.79    3.98        0.20
c. Intentional use of AI by State actors that harms individuals                         3.80    3.85        0.05
m. Human rights violations                                                              3.70    3.86        0.16
k. Inaccurate information / analysis provided by AI in critical fields                  3.53    3.88        0.35
d. Intentional use of AI by corporate actors that harms customers / users               3.54    3.83        0.29
i. Violation of intellectual property rights                                            3.49    3.62        0.13
n. Environmental harms                                                                  3.51    3.57        0.06
g. Harms to labour from adoption of AI                                                  3.36    3.54        0.18
e. Unintended autonomous actions by AI systems [excl. autonomous weapons]               2.95    3.28        0.34
f. Unintended multi-agent interactions among AI systems                                 3.04    2.98       -0.06

Shown: Average, where: 1 = Not concerned, 2 = Slightly concerned, 3 = Somewhat concerned, 4 = Concerned, 5 = Very concerned.
Note: Excludes “Don’t know” / “No opinion” and blank responses.
Source: OSET AI Risk Pulse Check, 13-25 May 2024.

Many concerns highest in Africa and in Latin America and the Caribbean
Especially around State use in armed conflict, enabling discrimination or human rights violations.

“Please rate your current level of concern that harms (existing or new) resulting from AI will become substantially more serious and/or widespread in the next 18 months for each area.” (n = 348)

BY REGION
[Chart: average level of concern for each example area by region (Africa, WEOG, Asia-Pacific, Latin America and the Caribbean, Eastern Europe), and each region’s difference from the aggregate rating for each area. Eastern Europe: smaller sample, interpret with caution. Women are more concerned than men, particularly in WEOG.]

Shown: Difference between aggregate (all regions) rating and indicated region’s rating, where: 1 = Not concerned, 2 = Slightly concerned, 3 = Somewhat concerned, 4 = Concerned, 5 = Very concerned.
Note: Excludes “Don’t know” / “No opinion” and blank responses.
Source: OSET AI Risk Pulse Check, 13-25 May 2024.
Women more concerned than men about all example areas
There are particularly large gaps on human rights violations, discrimination and the environment.

“Please rate your current level of concern that harms (existing or new) resulting from AI will become substantially more serious and/or widespread in the next 18 months for each area.” (n = 348)

BY GENDER (average rating)                                                              Men     Women   Women - Men
j. Damage to information integrity                                                      4.02    4.43    0.41
b. Intentional use of AI in armed conflict by State actors                              4.05    4.20    0.15
h. Inequalities arising from differential control and ownership over AI technologies    3.95    4.28    0.33
a. Intentional malicious use of AI by non-State actors                                  4.01    4.06    0.05
l. Discrimination / disenfranchisement, particularly against marginalized communities   3.61    4.21    0.60
c. Intentional use of AI by State actors that harms individuals                         3.70    3.98    0.29
m. Human rights violations                                                              3.49    4.12    0.63
k. Inaccurate information / analysis provided by AI in critical fields                  3.50    3.97    0.47
d. Intentional use of AI by corporate actors that harms customers / users               3.48    3.92    0.44
i. Violation of intellectual property rights                                            3.43    3.72    0.29
n. Environmental harms                                                                  3.29    3.83    0.54
g. Harms to labour from adoption of AI                                                  3.37    3.53    0.16
e. Unintended autonomous actions by AI systems [excl. autonomous weapons]               3.01    3.26    0.25
f. Unintended multi-agent interactions among AI systems                                 2.88    3.16    0.29

Shown: Average, where: 1 = Not concerned, 2 = Slightly concerned, 3 = Somewhat concerned, 4 = Concerned, 5 = Very concerned.
Note: Excludes “Don’t know” / “No opinion” and blank responses.
Source: OSET AI Risk Pulse Check, 13-25 May 2024.

71% concerned / very concerned about AI harms in the next 18 months
Relatively small differences in concern by age of respondent.

“What is your current overall level of concern that harms (existing or new) resulting from AI will become substantially more serious and/or widespread in the next 18 months?” (n = 348)
Scale: 5 = Very concerned, 4 = Concerned, 3 = Somewhat concerned, 2 = Slightly concerned, 1 = Not concerned.

BY AGE
[Chart: average level of concern by age group (under 30, 30-39, 40-49, 50-59, 60-69), with group averages ranging from 3.89 / 5 to 4.07 / 5; the 70+ group is not shown owing to small sample.]

Note: Excludes “Don’t know” / “No opinion” and blank responses.
Source: OSET AI Risk Pulse Check, 13-25 May 2024.



Respondents reporting technical expertise (training / developing AI) less
concerned about most example areas

“Please rate your current level of concern that harms (existing or new) resulting from AI will become substantially more serious and/or widespread in the next 18 months for each area.” (n = 348)

BY EXPERTISE: reports technical expertise training / developing AI (n = 127) versus does not report technical expertise training / developing AI (n = 221).
[Chart: difference between aggregate (all respondents) rating and each group’s rating for each example area; differences range from -0.25 to +0.25.]

Shown: Difference between aggregate (all respondents) rating and indicated group’s rating, where: 1 = Not concerned, 2 = Slightly concerned, 3 = Somewhat concerned, 4 = Concerned, 5 = Very concerned.
Note: Excludes “Don’t know” / “No opinion” and blank responses.
Source: OSET AI Risk Pulse Check, 13-25 May 2024.

Limited impact from technical expertise (training / developing AI)
Men are less concerned than women regardless of reporting status.

“What is your current overall level of concern that harms (existing or new) resulting from AI will become substantially more serious and/or widespread in the next 18 months?” (n = 348)
Scale: 5 = Very concerned, 4 = Concerned, 3 = Somewhat concerned, 2 = Slightly concerned, 1 = Not concerned. Total average: 3.98 / 5.

BY GENDER & EXPERTISE                                 1     2     3     4     5    Average
All respondents                                       2%    9%   18%   31%   40%   3.98
Reports technical expertise (n = 127)                 3%   11%   14%   30%   41%   3.95
Doesn’t report (n = 221)                              1%    8%   20%   31%   40%   4.00
Men, Reports (n = 83)                                 5%   10%   17%   28%   40%   3.89
Women, Reports (n = 44)                               0%   14%    9%   34%   43%   4.07
Men, Doesn’t report (n = 108)                         3%    8%   24%   33%   31%   3.82
Women, Doesn’t report (n = 111)                       0%    8%   15%   27%   49%   4.17

Note: Excludes “Don’t know” / “No opinion” and blank responses.
Source: OSET AI Risk Pulse Check, 13-25 May 2024.

Change in perception of level of concern in the past three months regarding risks of AI harms

50% of the respondents increased concern in the past three months; 48% remained the same
Almost nobody decreased; more women and non-WEOG respondents have increased levels of concern.

Note: Excludes “Don’t know” / “No opinion” and blank responses.
Source: OSET AI Risk Pulse Check, 13-25 May 2024.



Annex F: Opportunity scan responses
On the request of the High-level Advisory Body on Artificial Intelligence, the Office
of the Secretary-General’s Envoy on Technology (OSET) conducted a global AI
opportunity scan survey. Experts were asked to respond with their views in their
personal capacity (not on behalf of their institution or employer). The survey was
divided into sections covering opportunities in high/upper-middle-income countries
and lower-middle/lower-income countries, with only respondents reporting specific
knowledge about lower-middle/lower-income country contexts answering those
questions. The survey asked only about possible positive implications of AI.

Respondents were asked to what extent they were aware of specific examples to date
of AI increasing economic activity, accelerating scientific discoveries and contributing
to progress on individual SDGs.1 They were asked to provide details including case
studies, names of organizations, data and links to relevant articles/publications/
papers. Respondents were then asked how much progress they expected in the next
three years along the same dimensions.

As an additional view, respondents were asked by when they expected major


impact from AI along those dimensions, with 50% confidence/likelihood. Additional
questions were asked including which actors were involved in capturing certain
opportunities, what barriers contributed to the AI divide between countries, and
whether specific groups faced additional limitations harnessing opportunities from AI
and how these could be addressed.

The survey was fielded from 9 to 21 August 2024, with the invitee list constructed
from OSET and the Advisory Body’s networks, including participants in Advisory
Body deep dives. The survey was also circulated through the International
Telecommunication Union’s AI for Good meeting and the networks of the United
Nations Conference on Trade and Development. Over 1,000 individuals were
invited overall. More than 120 respondents replied to the survey, providing rich and
diverse perspectives (including across regions and genders) on opportunities from AI.

1 SDG 8 (Decent work and economic growth) and SDG 9 (Innovation, industry and infrastructure) were not asked about separately, given their close link to
increasing economic activity. SDG 17 (Partnerships for the Goals) was also not asked about specifically.

Overview of sample

Regional representation: strong global participation
Allows comparison of responses between the Western European and Others Group (WEOG) and other regions.

Respondents by region of nationality* (n = 121)

Region                             Men   Women   Total
WEOG                               36    22      58 (48%)
Asia-Pacific                       15    12      27 (22%)
Africa                             13    10      23 (19%)
Latin America and the Caribbean     7     1       8 (7%)
Eastern Europe                      3     2       5 (4%)

The Western European and Others Group (WEOG) includes western Europe plus Australia, Canada, Israel, New Zealand, Türkiye and the United States of America.

In total, 38 countries were represented. The most represented were: United States 23 (19%), Germany 8 (7%), India 8 (7%), United Kingdom 8 (7%), Canada 7 (6%), South Africa 7 (6%) and China 6 (5%).

* 9 respondents (7%) indicated multiple nationalities. If respondents were resident in one of their countries of nationality, that nationality was used for analysis (8 of 9). Otherwise, the least represented nationality was used (1 of 9).
Source: OSET AI Opportunity Scan survey, 9-21 August 2024.

Men are ~60% of both WEOG, non-WEOG samples
This consistency means univariate analyses by gender and region are not immediately confounded.

Respondents by region of nationality* (n = 121) BY REGION & GENDER

          WEOG nationality   Non-WEOG
Men       36 (62%)           38 (60%)
Women     22 (38%)           25 (40%)
Total     58                 63

* 9 respondents (7%) indicated multiple nationalities. If respondents were resident in one of their countries of nationality, that nationality was used for analysis (8 of 9). Otherwise, the least represented nationality was used (1 of 9).
Source: OSET AI Opportunity Scan survey, 9-21 August 2024.



Developing-country-knowledgeable sample less balanced

Respondents reporting specific knowledge about lower-middle/lower-income-country contexts, by region of nationality* (n = 78) BY REGION & GENDER

          WEOG nationality   Non-WEOG
Men       19 (70%)           28 (55%)
Women      8 (30%)           23 (45%)
Total     27                 51

* 9 respondents (7%) indicated multiple nationalities. If respondents were resident in one of their countries of nationality, that nationality was used for analysis (8 of 9). Otherwise, the least represented nationality was used (1 of 9). Only respondents reporting relevant knowledge were asked about lower-middle/lower-income countries.
Source: OSET AI Opportunity Scan survey, 9-21 August 2024.

Perceptions regarding positive impact of AI to date

Positive impact to date on growth and science, but less on most SDGs
Impact to date in high/upper-middle-income countries.

“To what degree are you aware of specific examples of AI currently or having recently directly contributed to … in high/upper-middle-income countries?”
Scale: 1 = Don’t believe AI is causing any positive impact, 2 = Aware of AI causing minor positive impact, 3 = Aware of AI causing positive impact, 4 = Aware of AI causing major positive impact, 5 = Aware of AI causing transformative positive impact.
Each row shows the share of responses at each scale point (1-5), the number of responses (n) and the average rating.

Increasing economic activity 4% 26% 47% 10% 12% n = 115 3.00


Accelerating scientific discoveries 5% 15% 40% 25% 15% 107 3.31
SDG 3 - Good health and well-being 19% 26% 35% 8% 12% 95 2.67
SDG 4 - Quality education 26% 27% 31% 10% 5% 96 2.42
SDG 11 - Sustainable cities and communities 38% 18% 34% 8% 3% 77 2.19
SDG 7 - Affordable and clean energy 40% 21% 27% 9% 3% 77 2.13
SDG 13 - Climate action 48% 18% 20% 10% 5% 80 2.08
SDG 15 - Life on land 48% 21% 21% 10% 62 1.92
SDG 6 - Clean water and sanitation 55% 17% 17% 8% 3% 64 1.88
SDG 2 - Zero hunger 55% 23% 13% 5% 4% 75 1.81
SDG 14 - Life below water 57% 19% 13% 11% 54 1.78
SDG 1 - No poverty 59% 21% 12% 5% 4% 78 1.74
SDG 12 - Responsible consumption & production 64% 13% 16% 7% 70 1.66
SDG 16 - Peace, justice and strong institutions 65% 19% 9% 4% 3% 75 1.60
SDG 5 - Gender equality 73% 10% 13% 3% 1% 78 1.49
SDG 10 - Reduced inequalities 77% 7% 12% 4% 82 1.43

Note: Excludes “Don’t know” / “No opinion” and blank responses. Did not ask about SDGs 8, 9 and 17.
Source: OSET AI Opportunity Scan survey, 9-21 August 2024.

Less impact reported in the lower-income world on all fronts
Impact to date in lower-middle/lower-income countries.

“To what degree are you aware of specific examples of AI currently or having recently directly contributed to … in lower-middle/lower-income countries?”
Scale: 1 = Don’t believe AI is causing any positive impact, 2 = Aware of AI causing minor positive impact, 3 = Aware of AI causing positive impact, 4 = Aware of AI causing major positive impact, 5 = Aware of AI causing transformative positive impact.
Each row shows the share of responses at each scale point (1-5), the number of responses (n) and the average rating.

Increasing economic activity 31% 36% 24% 6% 4% n = 72 2.17


Accelerating scientific discoveries 37% 32% 22% 7% 3% 60 2.08
SDG 3 - Good health and well-being 34% 27% 32% 3% 3% 59 2.15
SDG 4 - Quality education 38% 28% 24% 9% 2% 58 2.09
SDG 2 - Zero hunger 50% 24% 20% 6% 54 1.81
SDG 13 - Climate action 52% 21% 23% 2% 2% 52 1.81
SDG 11 - Sustainable cities and communities 61% 9% 22% 9% 46 1.78
SDG 6 - Clean water and sanitation 54% 22% 22% 2% 50 1.72
SDG 7 - Affordable and clean energy 60% 16% 22% 2% 45 1.67
SDG 1 - No poverty 62% 24% 11% 4% 55 1.56
SDG 15 - Life on land 67% 12% 21% 42 1.55
SDG 12 - Responsible consumption & production 73% 13% 13% 2% 48 1.44
SDG 16 - Peace, justice and strong institutions 72% 15% 11% 2% 54 1.43
SDG 5 - Gender equality 75% 10% 13% 2% 52 1.42
SDG 10 - Reduced inequalities 77% 12% 10% 2% 52 1.37
SDG 14 - Life below water 78% 11% 11% 36 1.33
Note: Excludes “Don’t know” / “No opinion” and blank responses. Only respondents reporting relevant knowledge were asked about lower-middle/lower-income countries.
Did not ask about SDGs 8, 9 and 17.
Source: OSET AI Opportunity Scan survey, 9-21 August 2024.

Less impact reported in the lower-income world on all fronts
Gap most pronounced on economic growth and science.

[Chart: average rating for “To what degree are you aware of specific examples of AI currently or having recently directly contributed to … ?” by country income group (high/upper-middle-income versus lower-middle/lower-income), for increasing economic activity, accelerating scientific discoveries and each SDG asked about; values as shown in the two preceding tables.]

Note: Excludes “Don’t know” / “No opinion” and blank responses. Only respondents reporting relevant knowledge were asked about lower-middle/lower-income countries.
Did not ask about SDGs 8, 9 and 17. Source: OSET AI Opportunity Scan survey, 9-21 August 2024.



Perceptions regarding expected positive impact of AI in the next three years

Expected impact on growth, science, health, education – less on others
Impact expected in the next three years in high/upper-middle-income countries.

“In the next three years, how much do you expect AI to directly contribute towards … in high/upper-middle-income countries?”
Scale: 1 = Don’t expect any positive impact, 2 = Expect minor positive impact, 3 = Expect positive impact, 4 = Expect major positive impact, 5 = Expect transformative positive impact.
Each row shows the share of responses at each scale point (1-5), the number of responses (n) and the average rating.

Increasing economic activity 4% 29% 41% 22% 5% n = 111 2.96


Accelerating scientific discoveries 2% 21% 40% 26% 12% 112 3.25
SDG 3 - Good health and well-being 11% 34% 34% 11% 10% 89 2.75
SDG 4 - Quality education 17% 28% 30% 17% 92 2.67
SDG 11 - Sustainable cities and communities 31% 32% 25% 10% 3% 72 2.22
SDG 7 - Affordable and clean energy 35% 24% 32% 8% 1% 72 2.18
SDG 13 - Climate action 38% 27% 21% 10% 4% 77 2.16
SDG 6 - Clean water and sanitation 38% 27% 25% 8% 1% 71 2.08
SDG 15 - Life on land 39% 34% 18% 7% 2% 61 1.97
SDG 14 - Life below water 44% 29% 15% 10% 2% 59 1.97
SDG 2 - Zero hunger 44% 29% 23% 5% 80 1.89
SDG 16 - Peace, justice and strong institutions 47% 29% 16% 7% 1% 75 1.87
SDG 12 - Responsible consumption & production 53% 22% 18% 6% 1% 72 1.81
SDG 1 - No poverty 44% 38% 14% 5% 85 1.80
SDG 5 - Gender equality 61% 22% 13% 4% 72 1.60
SDG 10 - Reduced inequalities 65% 22% 9% 3% 1% 78 1.53

Note: Excludes “Don’t know” / “No opinion” and blank responses. Did not ask about SDGs 8, 9 and 17.
Source: OSET AI Opportunity Scan survey, 9-21 August 2024

Some expected impact in lower-income world, but again more limited
Impact expected in the next three years in lower-middle/lower-income countries.

“In the next three years, how much do you expect AI to directly contribute towards … in lower-middle/lower-income countries?”
Scale: 1 = Don’t expect any positive impact, 2 = Expect minor positive impact, 3 = Expect positive impact, 4 = Expect major positive impact, 5 = Expect transformative positive impact.
Each row shows the share of responses at each scale point (1-5), the number of responses (n) and the average rating.

Increasing economic activity 19% 33% 37% 9% 1% n = 67 2.40


Accelerating scientific discoveries 20% 42% 31% 5% 3% 65 2.29
SDG 4 - Quality education 33% 33% 23% 11% 57 2.11
SDG 3 - Good health and well-being 29% 36% 31% 3% 58 2.09
SDG 7 - Affordable and clean energy 42% 28% 26% 4% 50 1.92
SDG 13 - Climate action 42% 29% 25% 4% 55 1.91
SDG 11 - Sustainable cities and communities 45% 30% 23% 2% 53 1.81
SDG 15 - Life on land 52% 29% 14% 2% 2% 42 1.74
SDG 6 - Clean water and sanitation 54% 25% 15% 6% 52 1.73
SDG 2 - Zero hunger 50% 33% 17% 54 1.67
SDG 14 - Life below water 60% 23% 15% 3% 40 1.60
SDG 1 - No poverty 52% 38% 11% 56 1.59
SDG 12 - Responsible consumption & production 63% 21% 17% 48 1.54
SDG 16 - Peace, justice and strong institutions 64% 24% 10% 2% 50 1.50
SDG 10 - Reduced inequalities 67% 20% 11% 2% 55 1.47
SDG 5 - Gender equality 74% 11% 13% 2% 53 1.43
Note: Excludes “Don’t know” / “No opinion” and blank responses. Only respondents reporting relevant knowledge were asked about lower-middle/lower-income countries.
Did not ask about SDGs 8, 9 and 17.
Source: OSET AI Opportunity Scan survey, 9-21 August 2024.

Less impact expected in the lower-income world on all fronts
Gap most pronounced on economic growth, science, health and education.

[Chart: average rating for “In the next three years, how much do you expect AI to directly contribute towards … ?” by country income group (high/upper-middle-income versus lower-middle/lower-income), for increasing economic activity, accelerating scientific discoveries and each SDG asked about; values as shown in the two preceding tables.]

Note: Excludes “Don’t know” / “No opinion” and blank responses. Only respondents reporting relevant knowledge were asked about lower-middle/lower-income countries.
Did not ask about SDGs 8, 9 and 17. Source: OSET AI Opportunity Scan survey, 9-21 August 2024.


Annex G: List of abbreviations
ACM Association for Computing Machinery
AG African Group
AI artificial intelligence
ANSI American National Standards Institute
APG Asia and the Pacific Group
ASEAN Association of Southeast Asian Nations
BSI British Standards Institution
CEN European Committee for Standardization
CENELEC European Committee for Electrotechnical Standardization
CERN European Organization for Nuclear Research
EEG Eastern European Group
ETSI European Telecommunications Standards Institute
FAO Food and Agriculture Organization of the United Nations
FATF Financial Action Task Force
FIMI foreign information manipulation and interference
FMF Frontier Model Forum
FSB Financial Stability Board
G20 Group of 20
G7 Group of Seven
GPU graphics processing unit
IAEA International Atomic Energy Agency
ICAO International Civil Aviation Organization
IEC International Electrotechnical Commission
IEEE Institute of Electrical and Electronics Engineers
ILO International Labour Organization
IMO International Maritime Organization
IPCC Intergovernmental Panel on Climate Change
ISO International Organization for Standardization
ITU International Telecommunication Union
LAC Latin America and the Caribbean
NIST National Institute of Standards and Technology (United States)
OECD Organisation for Economic Co-operation and Development
OHCHR Office of the United Nations High Commissioner for Human Rights
OSET Office of the Secretary-General’s Envoy on Technology
SAC Standardization Administration of China
SDG Sustainable Development Goal
UNCTAD United Nations Conference on Trade and Development
UNDP United Nations Development Programme
UNESCO United Nations Educational, Scientific and Cultural Organization
UNHCR United Nations High Commissioner for Refugees
UNOCT United Nations Office of Counter-Terrorism
WEOG Western European and Others Group
WHO World Health Organization
WIPO World Intellectual Property Organization
WSC World Standards Cooperation

Donors

The Body gratefully acknowledges the financial and in-kind contributions of the following governments
and partners, without whom it would not have been able to carry out its responsibilities:

Government of the Czech Republic


European Union
Government of Finland
Government of Germany
Government of Italy
Government of Japan
Government of the Kingdom of the Netherlands
Government of the Kingdom of Saudi Arabia
Government of Singapore
Government of Switzerland
Government of the United Arab Emirates
Government of the United Kingdom of Great Britain and Northern Ireland
Omidyar Network Fund
L’Organisation internationale de la Francophonie


