What is AI governance? | IBM


Published: 28 November 2023


Contributors: Tim Mucci, Cole Stryker

What is AI governance?
Artificial intelligence (AI) governance refers to the guardrails that
ensure AI tools and systems remain safe and ethical.
It establishes the frameworks, rules and standards that direct AI
research, development and application to ensure safety, fairness and
respect for human rights.

AI governance encompasses oversight mechanisms that address risks like bias, privacy
infringement and misuse while fostering innovation and trust. An ethics-centered
approach to AI governance requires the involvement of a wide range of stakeholders,
including AI developers, users, policymakers and ethicists, ensuring that AI
systems are developed and used in ways that align with society's values.

AI governance addresses the inherent flaws arising from the human element in AI
creation and maintenance. Because AI is built by people, through highly engineered code
and machine learning models, it is susceptible to human biases and errors. Governance
provides a structured approach to mitigate these risks, ensuring that machine learning
algorithms are monitored, evaluated and updated to prevent flawed or harmful
decisions.
AI solutions must be developed and used responsibly and ethically. That means
addressing the risks associated with AI: bias, discrimination and harm to individuals.
Governance addresses these risks through sound AI policy, regulation, data governance
and well-curated, well-maintained data sets.

Governance aims to establish the necessary oversight to align AI behaviors with ethical
standards and societal expectations and to safeguard against potential adverse impacts.


Why is AI governance important?


AI governance is essential for reaching a state of compliance, trust and efficiency in
developing and applying AI technologies. With AI's increasing integration into
organizational and governmental operations, its potential for negative impact has
become more visible. High-profile missteps, like the Tay chatbot incident (link resides
outside ibm.com), where a Microsoft AI chatbot learned toxic behavior from public
interactions on social media, and the COMPAS (link resides outside ibm.com) software's
biased recidivism risk scores, have highlighted the need for sound governance to prevent
harm and maintain public trust.
These instances show that AI can cause significant social and ethical harm without
proper oversight, emphasizing the importance of governance in managing the risks
associated with advanced AI. By providing guidelines and frameworks, AI governance
aims to balance technological innovation with safety, ensuring AI systems do not violate
human dignity or rights.

Transparent decision-making and explainability are critical for ensuring AI systems are
used responsibly. AI systems make decisions constantly, from deciding which ads to
show to determining whether to approve a loan. Understanding how AI systems reach
their decisions is essential to holding them accountable and to ensuring they decide
fairly and ethically.
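One concrete way to support that accountability is to log every automated decision with the inputs and reason codes behind it, so outcomes can be reviewed later. A minimal sketch in Python; the loan rule and its thresholds are hypothetical, chosen only to illustrate the logging pattern:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One logged automated decision, retained for later review."""
    applicant_id: str
    outcome: str       # "approved" or "denied"
    reasons: list      # human-readable reason codes explaining the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def decide_loan(applicant_id: str, income: float, debt: float, log: list) -> str:
    """Toy loan rule; the point is the logged explanation, not the rule itself."""
    reasons = []
    if income < 30_000:
        reasons.append("income below 30k threshold")
    if debt / max(income, 1) > 0.4:
        reasons.append("debt-to-income ratio above 0.4")
    outcome = "denied" if reasons else "approved"
    log.append(DecisionRecord(applicant_id, outcome,
                              reasons or ["all checks passed"]))
    return outcome

audit_log = []
decide_loan("A-001", income=55_000, debt=10_000, log=audit_log)  # approved
decide_loan("A-002", income=25_000, debt=15_000, log=audit_log)  # denied, with reasons
```

Because each record carries explicit reason codes, a reviewer can answer "why was this applicant denied?" without re-running the model.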

Moreover, AI governance is not just about ensuring one-time compliance; it is also about
sustaining ethical standards over time. AI models can drift, leading to changes in output
quality and reliability. Current trends in governance are moving beyond mere legal
compliance toward ensuring AI's social responsibility, thereby safeguarding against
financial, legal and reputational damage while promoting the responsible growth of
technology.
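Drift of the kind described above can be caught with even simple statistics before it erodes reliability. A minimal sketch, assuming prediction scores are logged and a training-time baseline is available; production systems typically use richer tests (population stability index, Kolmogorov–Smirnov and similar):

```python
from statistics import mean, stdev

def drift_score(baseline: list, recent: list) -> float:
    """Shift of the recent mean from the baseline mean, measured in
    baseline standard deviations. A crude but serviceable drift signal."""
    sd = stdev(baseline)
    if sd == 0:
        return 0.0 if mean(recent) == mean(baseline) else float("inf")
    return abs(mean(recent) - mean(baseline)) / sd

# Scores observed at training time vs. two production windows (illustrative data).
baseline = [0.48, 0.52, 0.50, 0.49, 0.51, 0.50]
stable   = [0.50, 0.49, 0.51, 0.50]   # close to the baseline distribution
drifted  = [0.70, 0.72, 0.69, 0.71]   # mean has shifted far from baseline
```

A governance process would compute this score on a schedule and route any model exceeding a tolerance (say, 2 standard deviations) to human review.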

Examples of AI governance
Examples of AI governance encompass a range of policies, frameworks and practices
that organizations and governments implement to ensure the responsible use of AI
technologies. These examples demonstrate how AI governance happens in different
contexts:

The General Data Protection Regulation (GDPR): An example of AI governance,
particularly in the context of personal data protection and privacy. While the GDPR is not
exclusively focused on AI, many of its provisions are highly relevant to AI systems,
especially those that process the personal data of individuals within the European Union.
The Organisation for Economic Co-operation and Development (OECD) AI Principles:
Adopted by over 40 countries, these principles emphasize responsible stewardship of
trustworthy AI, including transparency, fairness and accountability in AI systems.

Corporate AI Ethics Boards: Many companies have established ethics boards or
committees to oversee AI initiatives, ensuring they align with ethical standards and
societal values. For example, IBM has launched an AI Ethics Council to review new AI
products and services and ensure they align with IBM's AI principles. These boards often
include cross-functional teams from legal, technical and policy backgrounds.
Who oversees responsible AI governance?
In an enterprise-level organization, the CEO and senior leadership are ultimately
responsible for ensuring their organization applies sound AI governance throughout the
AI lifecycle. Legal and general counsel are critical in assessing and mitigating legal risks,
ensuring AI applications comply with relevant laws and regulations. Audit teams are
essential for validating the data integrity of AI systems and confirming that the systems
operate as intended without introducing errors or biases. The CFO oversees the financial
implications, managing the costs associated with AI initiatives and mitigating any
financial risks.

However, the responsibility for AI governance does not rest with a single individual or
department; it is a collective responsibility where every leader must prioritize
accountability and ensure that AI systems are used responsibly and ethically across the
organization. The CEO and senior leadership set the overall tone and culture of the
organization. When they prioritize accountable AI governance, they send all employees a
clear message that everyone must use AI responsibly and ethically. They can also invest
in employee AI governance training, actively develop internal policies and procedures
and create a culture of open communication and collaboration.

Principles of responsible AI governance
AI governance is essential for managing rapid advancements in AI technology,
particularly with the emergence of generative AI. Generative AI, which includes
technologies capable of creating new content and solutions, such as text, images and
code, has vast potential across many use cases. From enhancing creative processes in
design and media to automating tasks in software development, generative AI is
transforming how industries operate. However, with its broad applicability comes the
need for robust AI governance.

The principles of responsible AI governance are essential for organizations to safeguard
themselves and their customers. The following principles can guide organizations in the
ethical development and application of AI technologies:

– Empathy: Organizations should understand the societal implications of AI, not just
the technological and financial aspects. They need to anticipate and address the
impact of AI on all stakeholders.
– Bias control: It is essential to rigorously examine training data to prevent embedding
real-world biases into AI algorithms, ensuring fair and unbiased decisions.
– Transparency: There must be clarity and openness in how AI algorithms operate and
make decisions, with organizations ready to explain the logic and reasoning behind
AI-driven outcomes.
– Accountability: Organizations should proactively set and adhere to high standards to
manage the significant changes AI can bring, maintaining responsibility for AI's
impacts.
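The bias-control principle above can be made measurable. One common check is the demographic parity gap: the difference between the highest and lowest per-group selection rates in a labeled dataset. A sketch with illustrative data; the 0.1 review threshold mentioned below is a common convention, not a regulatory requirement:

```python
def selection_rates(records) -> dict:
    """Per-group rate of positive outcomes.
    records: iterable of (group, outcome) pairs, outcome in {0, 1}."""
    totals, positives = {}, {}
    for group, outcome in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(records) -> float:
    """Demographic parity difference: max minus min group selection rate."""
    rates = selection_rates(records)
    return max(rates.values()) - min(rates.values())

data = [("a", 1), ("a", 1), ("a", 0), ("a", 1),   # group a: 3/4 selected
        ("b", 1), ("b", 0), ("b", 0), ("b", 0)]   # group b: 1/4 selected
gap = parity_gap(data)   # 0.5, well above a common 0.1 review threshold
```

Running such a check on training data before deployment, and on live outcomes afterward, turns "bias control" from a principle into an auditable number.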

In late 2023, the White House issued an executive order to ensure AI safety and
security. This comprehensive strategy provides a framework for establishing new
standards to manage the risks inherent in AI technology. The U.S. government's new AI
safety and security standards exemplify how governments approach this highly sensitive
issue.

– AI safety and security: Mandates that developers of powerful AI systems share safety
test results and critical information with the U.S. government. Requires the development
of standards, tools and tests to ensure AI systems are safe and trustworthy.
– Privacy protection: Prioritizes developing and using privacy-preserving techniques
and strengthens privacy-preserving research and technologies. It also sets guidelines
for federal agencies to evaluate the effectiveness of privacy-preserving techniques.
– Equity and civil rights: Prevents AI from exacerbating discrimination and biases in
various sectors. Includes guidance for landlords and federal programs, addresses
algorithmic discrimination and ensures fairness in the criminal justice system.
– Consumer, patient and student protection: Helps advance responsible AI in
healthcare and education, such as developing life-saving drugs and supporting AI-
enabled educational tools.
– Worker support: Develops principles to mitigate AI's harmful effects on jobs and
workplaces, including addressing job displacement and workplace equity.
– Promoting innovation and competition: Catalyzes AI research across the U.S.,
encourages a fair and competitive AI ecosystem and facilitates the entry of skilled AI
professionals into the U.S.
– Global leadership in AI: Expands international collaboration on AI and promotes the
development and implementation of vital AI standards with international partners.
– Government use of AI: Ensures responsible government deployment of AI by issuing
guidance for agencies' use of AI, improving AI procurement and accelerating the hiring
of AI professionals.

While regulations and market forces standardize many governance metrics,
organizations must still determine how best to balance these measures for their
business. Measuring AI governance effectiveness varies by organization; each must
decide which focus areas to prioritize. With focus areas such as data quality, model
security, cost-value analysis, bias monitoring, individual accountability, continuous
auditing and adaptability differing by the organization's domain, it is not a
one-size-fits-all decision.

Levels of AI governance

AI governance doesn't have universally standardized "levels" in the way that, for
example, cybersecurity might have defined levels of threat response. Instead, AI
governance has structured approaches and frameworks developed by various entities
that organizations can adopt or adapt to their specific needs.

Organizations can use several frameworks and guidelines to develop their governance
practices. Some of the most widely used frameworks include the NIST AI Risk
Management Framework, the OECD Principles on Artificial Intelligence, and the European
Commission's Ethics Guidelines for Trustworthy AI. These frameworks provide guidance
for a range of topics, including transparency, accountability, fairness, privacy, security
and safety.

The levels of governance can vary depending on the organization's size, the complexity of
the AI systems in use and the regulatory environment in which the organization operates.

An overview of these approaches:


Informal governance
This is the least intensive approach to governance based on the values and
principles of the organization. There may be some informal processes, such as
ethical review boards or internal committees, but there is no formal structure or
framework for AI governance.


Ad hoc governance
This is a step up from informal governance and involves the development of
specific policies and procedures for AI development and use. This type of
governance is often developed in response to specific challenges or risks and
may not be comprehensive or systematic.


Formal governance
This is the highest level of governance and involves the development of a
comprehensive AI governance framework. This framework reflects the
organization's values and principles and aligns with relevant laws and
regulations. Formal governance frameworks typically include risk assessment,
ethical review and oversight processes.

How organizations are deploying AI governance
The concept of AI governance becomes increasingly vital as automation, driven by AI,
becomes prevalent in sectors ranging from healthcare and finance to transportation and
public services. The automation capabilities of AI can significantly enhance efficiency,
decision-making and innovation, but they also introduce challenges related to
accountability, transparency and ethical considerations.
The governance of AI involves establishing robust control structures containing policies,
guidelines and frameworks to address these challenges. It involves setting up
mechanisms to continuously monitor and evaluate AI systems, ensuring they comply
with established ethical norms and legal regulations.

Effective governance structures in AI are multidisciplinary, involving stakeholders from
various fields, including technology, law, ethics and business. As AI systems become
more sophisticated and integrated into critical aspects of society, the role of AI
governance in guiding and shaping the trajectory of AI development and its societal
impact becomes ever more crucial.

AI governance best practices involve an approach beyond mere compliance to
encompass a more robust system for monitoring and managing AI applications. For
enterprise-level businesses, the AI governance solution should enable broad oversight
and control over AI systems. Here is a sample roadmap to consider:

1. Visual dashboard: Utilize a dashboard that provides real-time updates on the health
and status of AI systems, offering a clear overview for quick assessments.
2. Health score metrics: Implement an overall health score for AI models using intuitive
and easy-to-understand metrics to simplify monitoring.
3. Automated monitoring: Employ automatic detection systems for bias, drift,
performance and anomalies to ensure models function correctly and ethically.
4. Performance alerts: Set up alerts for when a model deviates from its predefined
performance parameters, enabling timely interventions.
5. Custom metrics: Define custom metrics that align with the organization's key
performance indicators (KPIs) and thresholds to ensure AI outcomes contribute to
business objectives.
6. Audit trails: Maintain easily accessible logs and audit trails for accountability and to
facilitate reviews of AI systems' decisions and behaviors.
7. Open-source tools compatibility: Choose open-source tools compatible with various
machine learning development platforms to benefit from the flexibility and
community support.
8. Seamless integration: Ensure that the AI governance platform integrates seamlessly
with the existing infrastructure, including databases and software ecosystems, to
avoid silos and enable efficient workflows.
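Items 2 through 4 of the roadmap above can be combined into a simple scoring convention. The sketch below is one possible shape, not a standard: the metric names, thresholds and equal-weight scoring rule are all illustrative assumptions.

```python
def health_score(metrics: dict, thresholds: dict) -> tuple:
    """Aggregate monitored metrics into one 0-100 score plus a list of alerts.
    Each metric is treated as 'breached' when it exceeds its threshold."""
    alerts = [name for name, value in metrics.items()
              if value > thresholds.get(name, float("inf"))]
    # Convention: every breached threshold costs an equal share of the score.
    score = round(100 * (1 - len(alerts) / max(len(metrics), 1)))
    return score, alerts

# Illustrative monitored values for one model and the limits set for it.
metrics = {"drift": 0.08, "bias_gap": 0.22, "error_rate": 0.03}
thresholds = {"drift": 0.10, "bias_gap": 0.10, "error_rate": 0.05}
score, alerts = health_score(metrics, thresholds)  # bias_gap breaches its limit
```

A dashboard would display the score per model and raise the alert list for timely intervention, as described in items 1 and 4.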

By adhering to these practices, organizations can establish a robust AI governance
framework that supports responsible AI development, deployment and management,
ensuring that AI systems are compliant and aligned with ethical standards and
organizational goals.

What regulations require AI governance?
AI governance practices and regulations have been adopted by a number of countries to
prevent bias and discrimination. Below are just a few examples. It's important to
remember that regulation is always in flux, and organizations that manage complex AI
systems will need to keep a close eye on regional frameworks as they evolve.

The U.S.'s SR-11-7

SR-11-7¹ is the U.S. regulatory standard for effective and strong model governance in
banking. The regulation requires bank officials to apply company-wide model risk
management initiatives and maintain an inventory of models implemented for use,
under development for implementation, or recently retired. Leaders of the institutions
must also prove that their models are achieving the business purpose they were
intended for, and that they are up to date and have not drifted. Model development and
validation must enable anyone unfamiliar with a model to understand the model's
operations, limitations and key assumptions.
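The model inventory that SR-11-7 requires can be sketched as a simple data structure. The field names and status labels below paraphrase the regulation's categories for illustration; they are not prescribed by the regulation itself:

```python
from dataclasses import dataclass
from enum import Enum

class ModelStatus(Enum):
    IN_USE = "implemented for use"
    IN_DEVELOPMENT = "under development for implementation"
    RETIRED = "recently retired"

@dataclass
class ModelRecord:
    """One entry in a company-wide model inventory (illustrative fields)."""
    name: str
    status: ModelStatus
    business_purpose: str      # the purpose the model is intended to serve
    key_assumptions: list      # assumptions a reviewer must understand
    last_validated: str        # ISO date of the most recent validation

inventory = [
    ModelRecord("credit-risk-v3", ModelStatus.IN_USE,
                "estimate probability of default for retail loans",
                ["stable macroeconomic conditions"], "2023-09-15"),
    ModelRecord("fraud-score-v1", ModelStatus.RETIRED,
                "flag suspicious card transactions",
                ["historical fraud patterns persist"], "2022-01-30"),
]

active = [m.name for m in inventory if m.status is ModelStatus.IN_USE]
```

Keeping purpose, assumptions and validation dates alongside status makes it straightforward to show regulators which models are live, why they exist and when they were last checked.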

Canada's Directive on Automated Decision-Making

Canada’s Directive on Automated Decision-Making² describes how that country’s
government uses AI to guide decisions in several departments. The directive uses a
scoring system to assess the human intervention, peer review, monitoring and
contingency planning needed for an AI tool built to serve citizens. Organizations creating
AI solutions with a high score must conduct two independent peer reviews, offer public
notice in plain language, develop a human intervention failsafe and establish recurring
training courses for the system. Because Canada’s Directive on Automated
Decision-Making is guidance for the country’s own development of AI, the regulation
doesn’t directly affect companies the way SR-11-7 does in the U.S.

Europe’s evolving AI regulations

In April 2021, the European Commission presented its AI package³, including statements
on fostering a European approach to excellence and trust, and a proposal for a legal
framework on AI. The statements declare that while most AI systems will fall into the
category of "minimal risk," AI systems identified as "high risk" will be required to adhere
to stricter requirements, and systems deemed "unacceptable risk" will be banned.
Organizations must pay close attention to these rules or risk fines.

AI governance guidelines in the Asia-Pacific region

In the Asia-Pacific region, countries have released several principles and guidelines for
governing AI. In 2019, Singapore’s government released a framework with guidelines for
addressing issues of AI ethics in the private sector. India’s AI strategy framework
recommends setting up a center for studying how to address issues related to AI ethics,
privacy and more. China, Japan, South Korea, Australia and New Zealand are also
exploring guidelines for AI governance.⁴

Related solutions

watsonx

Easily deploy and embed AI across your business, manage all data sources, and
accelerate responsible AI workflows—all on one platform.

Explore watsonx

IBM OpenPages

Simplify data governance, risk management and regulatory compliance with IBM
OpenPages — a highly scalable, AI-powered, and unified GRC platform.

Explore OpenPages

Related resources

eBook: Build responsible AI workflows — our latest eBook outlines the key building
blocks of AI governance and shares a detailed AI governance framework that you can
apply in your organization.

Blog: 3 key reasons why your organization needs responsible AI — explore the
transformative potential of responsible AI for your organization and learn about the
crucial drivers behind adopting ethical AI practices.

Thought leadership: Is your AI trustworthy? — join the discussion on why businesses
need to prioritize AI governance to deploy responsible AI.

Article: What is AI ethics? — AI ethics is a multidisciplinary field that studies how to
optimize AI’s beneficial impact while reducing risks and adverse outcomes. Learn about
IBM’s approach to AI ethics.

Article: What is explainable AI? — explainable AI is crucial for an organization in
building trust and confidence when putting AI models into production.

Article: What is data governance? — learn how data governance ensures companies get
the most from their data assets.

Take the next step

Accelerate responsible, transparent and explainable AI workflows across the lifecycle
for both generative and machine learning models. Direct, manage and monitor your
organization’s AI activities to better manage growing AI regulations and detect and
mitigate risk.

Explore watsonx.governance

Book a live demo

Footnotes
1. “SR 11-7: Guidance on Model Risk Management.” (link resides outside ibm.com), Board of Governors of the Federal Reserve System
Washington, D.C., Division of Banking Supervision and Regulation, 4 April 2011.

2. “Canada's New Federal Directive Makes Ethical AI a National Issue.” Digital, 8 March 2019.

3. "A European approach to artificial intelligence." (link resides outside ibm.com), European Commission, 14 December 2023.

4. “Wrestling With AI Governance Around The World.” (link resides outside ibm.com), Forbes, 27 March 2019.
