What Is AI Governance? | IBM
What is AI governance?
Artificial intelligence (AI) governance refers to the guardrails that
ensure AI tools and systems are and remain safe and ethical.
It establishes the frameworks, rules and standards that direct AI
research, development and application to ensure safety, fairness and
respect for human rights.
AI governance encompasses oversight mechanisms that address risks like bias, privacy
infringement and misuse while fostering innovation and trust. An ethical approach to AI
governance requires the involvement of a wide range of stakeholders, including AI
developers, users, policymakers and ethicists, to ensure that AI systems are developed
and used in ways that align with society's values.
AI governance addresses the inherent flaws arising from the human element in AI
creation and maintenance. Since AI is a product of highly engineered code and machine
learning created by people, it is susceptible to human biases and errors. Governance
provides a structured approach to mitigate these risks, ensuring that machine learning
algorithms are monitored, evaluated and updated to prevent flawed or harmful
decisions.
AI solutions must be developed and used responsibly and ethically. That means
addressing the risks associated with AI: bias, discrimination and harm to individuals.
Governance addresses these risks through sound AI policy, regulation, data governance
and well-curated, well-maintained data sets.
Governance aims to establish the necessary oversight to align AI behaviors with ethical
standards and societal expectations and to safeguard against potential adverse impacts.
Transparent decision-making and explainability are critical for ensuring AI systems are
used responsibly. AI systems make decisions all the time, from deciding which ads to
show to determining whether to approve a loan. It is essential to understand how AI
systems make decisions to hold them accountable for their decisions and ensure that
they make them fairly and ethically.
Moreover, AI governance is not just about ensuring one-time compliance; it's also about
sustaining ethical standards over time. AI models can drift, leading to changes in output
quality and reliability. Current trends in governance are moving beyond mere legal
compliance towards ensuring AI's social responsibility, thereby safeguarding against
financial, legal and reputational damage, while promoting the responsible growth of
technology.
Examples of AI governance
Examples of AI governance encompass a range of policies, frameworks and practices
that organizations and governments implement to ensure the responsible use of AI
technologies in different contexts.
However, the responsibility for AI governance does not rest with a single individual or
department; it is a collective responsibility where every leader must prioritize
accountability and ensure that AI systems are used responsibly and ethically across the
organization. The CEO and senior leadership are responsible for setting the overall tone
and culture of the organization. When they prioritize accountable AI governance, they
send all employees a clear message that everyone must use AI responsibly and ethically. The
CEO and senior leadership can also invest in employee AI governance training, actively
develop internal policies and procedures and create a culture of open communication
and collaboration.
Principles of responsible AI governance
AI governance is essential for managing rapid advancements in AI technology,
particularly with the emergence of generative AI. Generative AI, which includes
technologies capable of creating new content and solutions, such as text, images and
code, has vast potential across many use cases. From enhancing creative processes in
design and media to automating tasks in software development, generative AI is
transforming how industries operate. However, with its broad applicability comes the
need for robust AI governance.
– Empathy: Organizations should understand the societal implications of AI, not just
the technological and financial aspects. They need to anticipate and address the
impact of AI on all stakeholders.
– Bias control: It is essential to rigorously examine training data to prevent embedding
real-world biases into AI algorithms, ensuring fair and unbiased decisions.
– Transparency: There must be clarity and openness in how AI algorithms operate and
make decisions, with organizations ready to explain the logic and reasoning behind
AI-driven outcomes.
– Accountability: Organizations should proactively set and adhere to high standards to
manage the significant changes AI can bring, maintaining responsibility for AI's
impacts.
In late 2023, the White House issued an executive order to ensure AI safety and
security. This comprehensive strategy provides a framework for establishing new
standards to manage the risks inherent in AI technology. The U.S. government's new AI
safety and security standards exemplify how governments approach this highly sensitive
issue.
– AI safety and security: Mandates that developers of powerful AI systems share safety
test results and critical information with the U.S. government. Requires the development
of standards, tools and tests to ensure AI systems are safe and trustworthy.
– Privacy protection: Prioritizes developing and using privacy-preserving techniques
and strengthens privacy-preserving research and technologies. It also sets guidelines
for federal agencies to evaluate the effectiveness of privacy-preserving techniques.
– Equity and civil rights: Prevents AI from exacerbating discrimination and biases in
various sectors. Includes guiding landlords and federal programs, addresses
algorithmic discrimination and ensures fairness in the criminal justice system.
– Consumer, patient and student protection: Helps advance responsible AI in
healthcare and education, such as developing life-saving drugs and supporting AI-
enabled educational tools.
– Worker support: Develops principles to mitigate AI's harmful effects on jobs and
workplaces, including addressing job displacement and workplace equity.
– Promoting innovation and competition: Catalyzes AI research across the U.S.,
encourages a fair and competitive AI ecosystem and facilitates the entry of skilled AI
professionals into the U.S.
– Global leadership in AI: Expands international collaboration on AI and promotes the
development and implementation of vital AI standards with international partners.
– Government use of AI: Ensures responsible government deployment of AI by issuing
guidance for agencies' use of AI, improving AI procurement and accelerating hiring of
AI professionals.
Levels of AI governance
AI governance doesn't have universally standardized "levels" in the way that, for
example, cybersecurity might have defined levels of threat response. Instead, AI
governance has structured approaches and frameworks developed by various entities
that organizations can adopt or adapt to their specific needs.
Organizations can use several frameworks and guidelines to develop their governance
practices. Some of the most widely used frameworks include the NIST AI Risk
Management Framework, the OECD Principles on Artificial Intelligence, and the European
Commission's Ethics Guidelines for Trustworthy AI. These frameworks provide guidance
for a range of topics, including transparency, accountability, fairness, privacy, security
and safety.
The levels of governance can vary depending on the organization's size, the complexity of
the AI systems in use and the regulatory environment in which the organization operates.
Informal governance
This is the least intensive approach to governance, based on the organization's
values and principles. There may be some informal processes, such as
ethical review boards or internal committees, but there is no formal structure or
framework for AI governance.
Ad hoc governance
This is a step up from informal governance and involves the development of
specific policies and procedures for AI development and use. This type of
governance is often developed in response to specific challenges or risks and
may not be comprehensive or systematic.
Formal governance
This is the highest level of governance and involves the development of a
comprehensive AI governance framework. This framework reflects the
organization's values and principles and aligns with relevant laws and
regulations. Formal governance frameworks typically include risk assessment,
ethical review and oversight processes.
1. Visual dashboard: Utilize a dashboard that provides real-time updates on the health
and status of AI systems, offering a clear overview for quick assessments.
2. Health score metrics: Implement an overall health score for AI models using intuitive
and easy-to-understand metrics to simplify monitoring.
3. Automated monitoring: Employ automatic detection systems for bias, drift,
performance and anomalies to ensure models function correctly and ethically.
4. Performance alerts: Set up alerts for when a model deviates from its predefined
performance parameters, enabling timely interventions.
5. Custom metrics: Define custom metrics that align with the organization's key
performance indicators (KPIs) and thresholds to ensure AI outcomes contribute to
business objectives.
6. Audit trails: Maintain easily accessible logs and audit trails for accountability and to
facilitate reviews of AI systems' decisions and behaviors.
7. Open-source tools compatibility: Choose open-source tools compatible with various
machine learning development platforms to benefit from the flexibility and
community support.
8. Seamless integration: Ensure that the AI governance platform integrates seamlessly
with the existing infrastructure, including databases and software ecosystems, to
avoid silos and enable efficient workflows.
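Several of these practices (automated drift detection, performance alerts and audit trails) can be sketched in a few lines of code. The sketch below is illustrative only: the Population Stability Index (PSI) metric and the 0.2 alert threshold are common industry rules of thumb, and every class and function name here is an assumption rather than an API of any specific governance platform.

```python
# Minimal sketch of automated model monitoring: drift detection via the
# Population Stability Index (PSI), threshold-based alerts and an audit trail.
import math
import time

def psi(baseline, current, bins=10):
    """PSI between two numeric samples; higher means more drift."""
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # Floor at a tiny value so the log term below is always defined.
        return [max(c / len(sample), 1e-6) for c in counts]

    b, c = frac(baseline), frac(current)
    return sum((cf - bf) * math.log(cf / bf) for bf, cf in zip(b, c))

class ModelMonitor:
    def __init__(self, name, psi_alert=0.2):
        self.name = name
        self.psi_alert = psi_alert  # rule of thumb: PSI > 0.2 suggests drift
        self.audit_log = []         # audit trail of every check performed

    def check_drift(self, baseline, current):
        score = psi(baseline, current)
        status = "ALERT" if score > self.psi_alert else "OK"
        self.audit_log.append({
            "ts": time.time(), "model": self.name,
            "metric": "psi", "value": round(score, 4), "status": status,
        })
        return status, score

monitor = ModelMonitor("loan-approval-v2")
baseline = [0.1 * i for i in range(100)]      # training-time score sample
drifted = [5 + 0.1 * i for i in range(100)]   # shifted production sample
status, score = monitor.check_drift(baseline, drifted)
```

In production, the alert branch would typically notify an on-call team and the audit log would be persisted to durable storage, but the pattern is the same: compute a metric, compare it to a predefined threshold and record every check.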
SR 11-7 [1] is the U.S. regulatory standard for effective model governance in banking.
The regulation requires bank officials to apply company-wide model risk management
initiatives and maintain an inventory of models implemented for use, under development
for implementation, or recently retired. Leaders of the institutions must also prove that
their models are serving the business purpose for which they were intended, are up to
date and have not drifted. Model development and validation must enable anyone
unfamiliar with a model to understand the model's operations, limitations and key
assumptions.
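The inventory requirement above can be sketched as a simple record type. The field names, lifecycle states and example values below are illustrative assumptions, not part of the SR 11-7 guidance itself; they simply show the kind of information such an inventory tracks.

```python
# Illustrative sketch of a model inventory record covering the lifecycle
# states and documentation fields that model risk management typically tracks.
from dataclasses import dataclass, field
from enum import Enum

class ModelStatus(Enum):
    IN_USE = "implemented for use"
    IN_DEVELOPMENT = "under development for implementation"
    RETIRED = "recently retired"

@dataclass
class ModelInventoryEntry:
    model_id: str
    business_purpose: str               # what the model is intended to solve
    status: ModelStatus
    key_assumptions: list = field(default_factory=list)
    limitations: list = field(default_factory=list)
    last_validated: str = ""            # date of last independent validation

inventory = [
    ModelInventoryEntry(
        model_id="credit-risk-pd-v3",
        business_purpose="Estimate probability of default for retail loans",
        status=ModelStatus.IN_USE,
        key_assumptions=["macro conditions similar to the training window"],
        limitations=["not validated for commercial lending portfolios"],
        last_validated="2024-11-01",
    ),
]

# Regulators expect a current view of which models are live.
in_use = [m.model_id for m in inventory if m.status is ModelStatus.IN_USE]
```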
In April 2021, the European Commission presented its AI package [3], including statements
on fostering a European approach to excellence, trust and a proposal for a legal
framework on AI. The statements declare that while most AI systems will fall into the
category of "minimal risk," AI systems identified as "high risk" will be required to adhere
to stricter requirements, and systems deemed "unacceptable risk" will be banned.
Organizations must pay close attention to these rules or risk fines.
AI governance guidelines in the Asia-Pacific region
In the Asia-Pacific region, countries have released several principles and guidelines for
governing AI. In 2019, Singapore's government released a framework with guidelines for
addressing issues of AI ethics in the private sector. India's AI strategy framework
recommends setting up a center for studying how to address issues related to AI ethics,
privacy and more. China, Japan, South Korea, Australia and New Zealand are also
exploring guidelines for AI governance [4].
Footnotes
1. "SR 11-7: Guidance on Model Risk Management." Board of Governors of the Federal Reserve System, Division of Banking Supervision and Regulation, Washington, D.C., 4 April 2011.
2. "Canada's New Federal Directive Makes Ethical AI a National Issue." Digital, 8 March 2019.
3. "A European approach to artificial intelligence." European Commission, 14 December 2023.
4. "Wrestling With AI Governance Around The World." Forbes, 27 March 2019.