Implementing Generative AI with Speed and Safety
March 2024
Generative AI (gen AI) presents a once-in-a-generation opportunity for companies, with the potential for transformative impact across innovation, growth, and productivity. The technology can now produce credible software code, text, speech, high-fidelity images, and interactive videos. It has identified the potential for millions of new materials through crystal structures and even developed molecular models that may serve as the base for finding cures for previously untreated diseases.

McKinsey research has estimated that gen AI has the potential to add up to $4.4 trillion in economic value to the global economy while enhancing the impact of all AI by 15 to 40 percent.1 While many corporate leaders are determined to capture this value, there's a growing recognition that gen AI opportunities are accompanied by significant risks. In a recent flash survey of more than 100 organizations with more than $50 million in annual revenue, McKinsey finds that 63 percent of respondents characterize the implementation of gen AI as a "high" or "very high" priority.2 Yet 91 percent of these respondents don't feel "very prepared" to do so in a responsible manner.

That unease is understandable. The risks associated with gen AI range from inaccurate outputs and biases embedded in the underlying training data to the potential for large-scale misinformation and malicious influence on politics and personal well-being. There are also broader debates on both the possibility and desirability of developing AI in general. These issues could undermine the judicious deployment of gen AI, potentially leading companies to pause experimentation until the risks are better understood, or even deprioritize the technology because of concerns over an inability to manage the novelty and complexity of these issues.

However, by adapting proven risk management approaches to gen AI, it's possible to move responsibly and with good pace to capture the value of the technology. Doing so will also allow companies to operate effectively while the regulatory environment around AI continues to evolve, such as with President Biden's executive order regarding gen AI development and use and the EU AI Act (see sidebar, "The United States moves to regulate AI"). In addition, most organizations are likely to see the use of gen AI increase "inbound" threats (risks likely to affect the organization from the outside, independent of its own gen AI deployments).
The United States moves to regulate AI

On October 30, 2023, the Biden administration released a long-awaited executive order aimed at addressing concerns related to AI development in economic, national-security, and social domains. The order establishes principles, tasks federal agencies with AI-testing methods, codifies government oversight of private AI development, and outlines AI's impact on national security and foreign policy:

— Holistic AI governance. The order establishes a comprehensive framework for AI governance, emphasizing ethics, safety, and security. It addresses the importance of responsible innovation, collaboration, and competition in the AI industry.

— Private sector accountability. The order mandates that private companies involved in AI adhere to industry standards, report on compliance, and implement best practices. This includes meeting specific guidelines on transparency and accountability, especially for dual-use foundation models and large-scale computing clusters.

— Cross-sector impact. The order addresses various sectors affected by AI, including critical infrastructure, cybersecurity, education, healthcare, national security, and transportation. It promotes interagency collaboration to integrate AI responsibly and securely across these sectors, aligning government and industry efforts for societal benefit.
1. "The economic potential of generative AI: The next productivity frontier," McKinsey, June 14, 2023.
2. Unpublished data from McKinsey survey results.
This article provides a blueprint for developing an approach to implementing gen AI responsibly:

1. Launch a sprint to understand the risk of inbound exposures related to gen AI.

2. Develop a comprehensive view of the materiality of gen-AI-related risks across domains and use cases, and build a range of options (including both technical and nontechnical measures) to manage risks.

3. Establish a governance structure that balances expertise and oversight with an ability to support rapid decision making, adapting existing structures whenever possible.

4. Embed the governance structure in an operating model that draws on expertise across the organization and includes appropriate training for end users.

Following these steps helps organizations move quickly to scale the technology and capture its benefits while minimizing their exposure to the potential downsides.

Understanding and responding to inbound risks

In our experience, including through building McKinsey's own gen AI application, gen-AI-related risks can be captured in eight main categories (Exhibit 1). These categories consider both inbound risks and risks that directly result from the adoption of gen AI tools and applications. Every company should develop some version of this core taxonomy to support understanding and communication on the risks arising from the implementation of gen AI.
Exhibit 1 (excerpt): Gen-AI-related risks fall into eight main categories; the two recoverable here are:

— Security threats: vulnerabilities in gen AI systems (e.g., payload splitting to bypass safety filters, manipulability of open-source models).

— Third party: risks associated with the use of third-party AI tools (e.g., proprietary data being used by public models).
Deciding how to respond to inbound risks is a focus for many executive teams and boards. This decision should serve as a foundation for how an organization communicates about gen AI to its employees and stakeholders. It should also inform the approach to use cases.

We see four primary sources of inbound risk from the adoption of gen AI:

— security threats, resulting from the increased volume and sophistication of attacks from gen-AI-enabled malware

— third-party risk, resulting from challenges in understanding where and how third parties may be deploying gen AI, creating potential unknown exposures

— malicious use, resulting from the potential for bad actors to create compelling deepfakes of company representatives or branding that result in significant reputational damage

— intellectual property (IP) infringement, resulting from IP (such as images, music, and text) being scraped into training engines for underlying large language models and made accessible to anyone using the technology

Most organizations will benefit from a focused sprint to investigate how gen AI is changing their external environment, with two primary objectives. The first is to understand potential exposures to inbound risks, anchored in the organization's risk profile (for example, how many third parties have access to sensitive or confidential data that need to be restricted from training external gen AI models). The second objective is to understand the maturity and readiness of the control environment: the technical and nontechnical capabilities the organization has in place to prevent, detect, and ultimately respond to inbound risks. These include cyber and fraud defenses, third-party diligence to identify where critical third parties may be deploying gen AI, and the ability to limit the scraping of company IP by engines used to train large language models.
Exhibit 2: Different generative AI use cases are associated with different kinds of risk.

[The exhibit maps each use case against its primary risks; the per-cell ratings are not recoverable from the text.]

Risk categories: impaired fairness; IP (intellectual property) infringement; data privacy and quality; malicious use; security threats; performance and "explainability"; strategic.

Use cases: customer journeys (e.g., chatbots for customer service); concision (e.g., generating content summaries); coding (e.g., generating or debugging code); creative content (e.g., developing marketing content).
Exhibit 3: Organizations that deploy generative AI use cases can create a heat map ranking the potential severity of various categories of risk.

[The exhibit rates each risk category as low, medium, or high for each use case; the ratings themselves are not recoverable from the text.]

Risk categories: impaired fairness; IP (intellectual property) infringement; data privacy and quality; malicious use; security threats; performance and "explainability"; strategic; third party.

Example use case: customer journeys (AI financial advisers for individualized advice).
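The taxonomy and heat-map exercise described above can be sketched in a few lines of code. The sketch below is illustrative only: the eight category names are taken from the article's exhibits, but the `build_heat_map` helper, the numeric severity scale, and the example ratings are hypothetical placeholders, not McKinsey data or tooling.

```python
# Eight risk categories, as named in the article's exhibits.
RISK_CATEGORIES = [
    "impaired fairness",
    "IP infringement",
    "data privacy and quality",
    "malicious use",
    "security threats",
    "performance and explainability",
    "strategic",
    "third party",
]

# Hypothetical numeric scale for the low/medium/high ratings.
SEVERITY_SCALE = {"low": 1, "medium": 2, "high": 3}

def build_heat_map(ratings: dict) -> dict:
    """Validate per-use-case ratings against the taxonomy and scale,
    returning a use case -> {category: numeric severity} mapping."""
    heat_map = {}
    for use_case, category_ratings in ratings.items():
        row = {}
        for category, severity in category_ratings.items():
            if category not in RISK_CATEGORIES:
                raise ValueError(f"Unknown risk category: {category}")
            if severity not in SEVERITY_SCALE:
                raise ValueError(f"Unknown severity: {severity}")
            row[category] = SEVERITY_SCALE[severity]
        heat_map[use_case] = row
    return heat_map

# Hypothetical ratings for the use case shown in Exhibit 3.
example = build_heat_map({
    "customer journeys (AI financial advisers)": {
        "data privacy and quality": "high",
        "performance and explainability": "high",
        "strategic": "medium",
        "third party": "low",
    }
})
```

Keeping the taxonomy in one shared structure gives risk, legal, and engineering teams a common vocabulary, which is the point the article makes about developing "some version of this core taxonomy."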
Exhibit 4 (digital only) illustrates a six-step chatbot flow:

1. User asks question. "What are my vision care benefits?"

2. Chatbot clarifies question. The model seeks to better understand the user's needs by asking for more data: Permanent or temporary employee? Eyewear or checkups? State of residence? Benefits plan subscription?

3. Prompt is enriched. The model enriches the query by retrieving templates from a prompt library (all templates have been tested against performance benchmarking); after receiving the updated prompt, the user is asked to confirm that the query has been parsed accurately.

4. Bot searches relevant knowledge. The model searches for relevant information to answer the prompt.

5. Bot generates response. The model generates a response (explicitly citing the HR documents used to compose it) and conducts a final set of risk and quality checks.

6. Answer is consumed. The user receives the response and, thanks to the citations, can easily verify the answer and dig deeper if necessary.
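The six-step chatbot flow can be sketched as a simple pipeline. Everything below is a hypothetical stand-in for illustration: a production system would call an actual large language model for the clarification and generation steps and a real search index for retrieval, and the knowledge base, prompt template, and function names here are invented.

```python
import re

# Stand-in for the HR document store the bot searches in step 4.
KNOWLEDGE_BASE = {
    "hr-benefits-007": ("Permanent employees in Ohio receive one eye checkup "
                        "and a $150 eyewear allowance per year."),
}

# Stand-in for a prompt library of benchmarked templates (step 3).
PROMPT_TEMPLATES = {
    "benefits_lookup": ("Answer using only the cited documents. "
                        "Employee type: {employee_type}. State: {state}. "
                        "Question: {question}"),
}

def clarify(question: str) -> dict:
    """Step 2: collect the extra fields the bot would ask the user for.
    Hardcoded here; a real bot would gather these interactively."""
    return {"employee_type": "permanent", "state": "Ohio", "question": question}

def enrich(fields: dict) -> str:
    """Step 3: build the enriched prompt from a tested template."""
    return PROMPT_TEMPLATES["benefits_lookup"].format(**fields)

def retrieve(prompt: str) -> list:
    """Step 4: naive keyword overlap in place of a real search index."""
    words = set(re.findall(r"[a-z]+", prompt.lower()))
    return [doc_id for doc_id, text in KNOWLEDGE_BASE.items()
            if words & set(re.findall(r"[a-z]+", text.lower()))]

def generate(prompt: str, doc_ids: list) -> dict:
    """Step 5: compose an answer that explicitly cites its sources."""
    answer = " ".join(KNOWLEDGE_BASE[d] for d in doc_ids)
    return {"answer": answer, "citations": doc_ids}

def quality_check(response: dict) -> dict:
    """Step 5 (cont.): final risk/quality gate; here, require citations."""
    if not response["citations"]:
        raise ValueError("Response blocked: no supporting citations")
    return response

def answer_question(question: str) -> dict:
    fields = clarify(question)                     # steps 1-2
    prompt = enrich(fields)                        # step 3
    docs = retrieve(prompt)                        # step 4
    return quality_check(generate(prompt, docs))   # steps 5-6

result = answer_question("What are my vision care benefits?")
```

The design choice the exhibit emphasizes carries over directly: because citations travel with the answer, the final gate can block any uncited response, and the user can verify the sources in step 6.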
Exhibit 5: Moving with speed while mitigating risk often requires revised governance. Recoverable elements include:

— Establish a cross-functional, responsible gen AI steering group.

— Hire an AI governance officer to propel the centralization needed for consistent AI policies and standards and to keep the internal control system updated.

— Develop AI guidelines and policies, agreed upon by the executive team and board, to guide responsible company-wide AI adoption and use cases.

[The exhibit also covers governance tools and governance culture; those details are not recoverable from the text.]
— Engineers. Engineers are technical experts who understand the mechanics of gen AI. They develop or customize the technology to support the gen AI use cases. Just as important, they're responsible for guiding on the technical feasibility of mitigations and ultimately coding the mitigations to limit risk, as well as developing technical-monitoring strategies.

— Governors. Governors make up the teams that help establish the necessary governance, processes, and capabilities to drive responsible and safe implementation practices for gen AI. These include establishing the core risk frameworks, guardrails, and principles to guide the work of designers and engineers and challenging risk evaluation and mitigation effectiveness (especially for higher-risk use cases). The AI governance officer is a prime example of this persona, although the role will need to be complemented with others, given the range of potential risks. These roles will ideally cover data risk, data privacy, cybersecurity, regulatory compliance, and technology risk. Given the nascency of gen AI, governors will often need to coordinate with engineers to launch "red team" tests of emerging use cases built on gen AI models to identify and mitigate potential challenges.

An operating model should account for how the different personas will interact at different stages of the gen AI life cycle. There will be natural variations for each organization, depending on the specific capabilities embedded in each of the personas. For example, some organizations will have more technical capabilities in designers, meaning they may have a more active delivery role. But the intent of the operating model is to show how engagement varies at each stage of deployment.

Gen AI has the potential to redefine how people work and live. While the technology is fast developing, it comes with risks that range from concerns over the completeness of the training data to the potential of generating inaccurate or malicious outputs. Business leaders need to revise their technology playbooks and drive the integration of effective risk management from the start of their engagement with gen AI. This will allow for the application of this exciting new technology in a safe and responsible way, helping companies manage known risks (including inbound risks) while building the muscles to adapt to unanticipated risks as the capabilities and use cases of the technology expand. With major potential uplift in productivity at stake, working to scale gen AI sustainably and responsibly is essential in capturing its full benefits.
Oliver Bevan is a partner in McKinsey’s Chicago office; Michael Chui is a partner in the Bay Area office, where Brittany Presten
is an associate partner and Lareina Yee is a senior partner; and Ida Kristensen is a senior partner in the New York office.