Gartner Business Quarterly 3Q2023
Letter From the Editor

… and Now

Nearly half of more than 2,500 executives we polled said they have been planning to spend more on AI of all kinds since ChatGPT rolled out in late 2022. And separately, dozens of executives have told us in conversations that they want to move faster than they did on past AI initiatives and get something into production within a year.

Their biggest question: What steps must we take over the next 12 months so we don’t have to play catchup for years to come — while protecting against pitfalls known and unknown?

This issue of Gartner Business Quarterly will help you act now. It’s packed with use cases along with guidance for considering investments, enticing customers, and rethinking your talent strategy. Just as importantly, you’ll find analysis of very real dangers ahead, some that have hit the headlines and some that have not.

… AI, which uses knowledge graphs and causal networks for optimal efficiency — tamping down the models’ voracious appetite for energy. And buy clean power where you can.

Company tactics come from ABB and HP Inc., while important context emerges in our Q&A interviews with EU lawmaker Dragos Tudorache and decision-making experts Daniel Kahneman and Olivier Sibony.

GBQ helps you and your team align with others and reach peak effectiveness, so your enterprise can achieve its goals, be bold and principled, and bring employees, investors and the public along for the ride.

Our standing departments keep you up to speed — Cutting Edge is a look at provocative new data; Briefs offer short takes about smarter spending and planning, talent and culture, growth and innovation, and data and technology.

We welcome your feedback. Please contact me at judy.pasternak@gartner.com.

— Judy Pasternak
Departments

05 The Cutting Edge 3Q23: Cool New Data Points
17 The Whiteboard: Big Questions About How to Invest in Generative AI
26 4 Decisions to Make When Creating a Generative AI Policy
48 How Generative AI Can Help Meet Customer Experience Expectations
53 Learn to Serve Your AI-Powered Customers Before They ‘Walk Away’
72 Briefs: Quick Takes on Fresh Research
Q. What kind of changes, if any, are you making in response to continuing globalization/reglobalization/slowbalization or deglobalization?

n varies
Source: 2023 Gartner CEO and Senior Business Executive Survey
Note: Respondents could choose from the following scenarios on globalization: globalization is continuing but changing, as power shifts between major nations (reglobalization); globalization is slowing down (slowbalization); globalization is continuing much the same as it has for decades; and globalization is reversing (deglobalization).
The Cutting Edge 2023
When Making a Strategic Pivot, Involving ERM Boosts the Chance of Success

Enterprise risk management teams can discuss relevant risk information, run scenario planning workshops, and map out and review the disruption response plans. Respondents who consult ERM are 19% more likely to achieve intended pivot objectives.

[Chart: Impact of Consulting ERM on Strategic Pivot Outcomes — percentage of respondents achieving intended pivot objectives: 62% of those who consulted ERM vs. 43% of those who did not]

Despite Reports of ESG Backlash, Business Leaders Have Increased Commitments

Almost three-quarters (72%) say they are building out their programs.

[Chart: Business Leaders Who Report Increased ESG Commitments Across the Past 12 Months, by percentage]

[Chart: GC-Cited Examples of Increasing ESG Commitments at Their Organizations — bringing on dedicated FTEs in ESG-specific roles; increased ESG funding and/or establishing a dedicated ESG budget; establishing ESG committees and/or working groups]
Nearly Half of Organizations Use Biometric Verification

Companies are turning to biometric verification just as more jurisdictions plan to regulate it. To manage this risk, legal and privacy should work with information security and the functions using biometrics to check how data will be stored and protected, and how consent will be obtained.

[Chart: Does Your Organization Use Biometric Verification? — 48%; 5% for customers only; 12% for customers and employees]

Even Top Performers Struggle With Feelings of Futility About Doing ‘Enough’

Your top people need performance confirmation, not just conversation. Make sure you reward and recognize great achievements in real time to keep employees motivated.

[Chart: Prevalence of High Futility by Recent Performance Rating — Percentage of Employees Experiencing High Futility. Futility items: “I feel anxious about whether I am performing”; “Reaching high performance feels hopeless”; “Someone will always do more than me.” Does Not Meet Expectations: 33%]
Only Half of Boards Are Effective — But the GC Can Help

While boards face a host of challenges — whether it’s increased regulatory action, heightened cybersecurity risk, or rising geopolitical tensions — there’s clearly room for improvement.

[Chart: Percent of Organizations With Effective Boards — Percent of Respondents Scoring an Average ≥6 on the 7-Point Board Effectiveness Indexa: 53%. n = 92. Source: 2023 Gartner Corporate Governance and Board Management Benchmark Survey]

Three Steps to Improve Board Operations

When general counsel take steps to make the board more effective, they play a big role in helping companies navigate an ever-shifting web of economic, social and geopolitical pressures.

[Chart: Maximum Impact of Each Factor on Board Effectiveness. n = 92; R² = 55%. Source: 2023 Gartner Corporate Governance and Board Management Diagnostic Survey]

a Board Effectiveness Index combines Executive Accountability and Board Oversight of Strategy and Risk.

© 2023 Gartner, Inc. and/or its affiliates. All rights reserved. CM_GBS_2412262
Balance the Environmental Perils
and Promises of Generative AI
by Kristin Moyer, Sarah Watt and Pieter den Hamer

… as well as feasibility (the level of difficulty and cost to implement the use case)

• Negative environmental impact — As measured by GHG emissions and electricity and water consumption

We evaluated sustainability use cases, employing general-purpose generative AI tools (such as ChatGPT) based on a combination of primary and secondary research, proprietary data and analyst experience (see Figure 1).

[Figure 1: Generative AI sustainability use cases, grouped under Mitigate Risks, Optimize Costs and Drive Growth — avoid stranded assets; develop organizational sustainability policies; develop sustainability KPIs; support decision making; improve corporate sustainability communications; engage supply chain partners; train employees; embed GenAI in products to make them more sustainable; discover alternative resources and materials]
The Whiteboard:
Big Questions About How
to Invest in Generative AI
by Van Baker, Erick Brethenoux and Arun Chandrasekaran
Contributions by Jeff Cribbs, Meg Day, Ellen Eichhorn, Dennis P. Gannon, Dan Gottlieb, Robert Hetu,
Grant Faulkner Nelson, Pedro Pacheco, Eser Rizaoglu, Moutusi Sau, Steve Shapiro, Tony Sheehan,
Jasleen Kaur Sindhu, Lauren Smith, Marco Steecker, Noha Tohamy and Shubhangi Yadav
• Vision: How might the capabilities created by GenAI impact our strategy?
• Stakeholders: Who is likeliest to be impacted?
• Processes: Which ones will we need to transform or create?
• Benefits: What new opportunities and capabilities will it present?
• Objective: What are we trying to gain by applying it to this problem?
• Data: What data is available to feed into GenAI? How much is there? Does it have sufficient depth (i.e., metadata)?
• Adoption: How will staff react to its use?
• Skills: What skills are required to iterate and collaborate with this tool?
• System architecture: How will we train GenAI to leverage our infrastructure?
• Protection: How will we safeguard proprietary and sensitive data? What disclosures will we need to make?
• Prevention: Who will be accountable for GenAI’s outputs, and to what degree? What governance structures will we need to detect and respond when it acts in unexpected ways?

Image generated using DALL-E, https://openai.com/dall-e-2.
• AI frontline co-pilot: Chat interface helps client-facing employees get important information faster
• Compliance and regulatory monitoring: Assist in verifying communications with clients against internal codes and rules
• Personalized customer support: Recommendations for contact center agents and relationship managers based on customer profile, needs and expectations
• Claims management: Individualized suggestions/explanations on claims coverage and applicant-friendly reasons for denials

Morgan Stanley is training GPT-4 to help its financial advisors.5

• Conversational patient self-triage and symptom checking: Chatbot makes suggestions and guides patients regarding acute symptoms, chronic condition management, health and wellness activities, or behavioral health needs
• Auto-composition of clinical messages: Automatic replies based on content and tone of patient message, accessible clinical data, and clinician’s tone and preferences
• Consultative population health analytics: Users ask plain language questions of a report or dashboard in areas like population health, costs and care
• Coding assistant for mainframe support: Helps software developers generate, test, debug code snippets in languages common to mainframe technologies, like COBOL — often used in U.S. healthcare payers’ claims processing systems
• Scientific literature discovery: LLMs help scientists identify relevant research, extract insights, aggregate findings and generate new hypotheses

Mass General Brigham, a health care system in the U.S., is testing generative AI for patient portal messages and clinical notes.6

• Student tutors: Conversational UI to support personalized learning
• Language training: AI reading and speaking companion
• Faculty assistant: Accelerate authoring of quizzes, tests, presentation materials, curricula, lesson plans, feedback, student referral letters
• Virtual student assistant: Chat interface to integrated student data
• Student recruitment/enrollment/persistence: Including nudging students toward course completion
Corporate leaders have been grappling in 2023 with how to manage workplace use of generative AI applications such as ChatGPT and Bard.

Though some of the most prominent voices in generative AI development have warned that these products need close oversight and regulation, that task for now falls to the general counsel and C-suite colleagues who will lead implementation or use of the new technologies.1,2

Government requirements do loom on the horizon — the European Union, China, the U.S., the U.K. and Canada are all working on rules and guidance now. Having policies in place will also prepare your enterprise for whatever measures authorities enact in the future.

Based on our analysis of AI policies already instituted by companies and city governments, the GC should direct organizations to consider the following questions before establishing a policy:

1. What is our risk tolerance for use of generative AI?
2. What restrictions should we put in place to mitigate risks — and how will those differ for publicly available applications (such as OpenAI’s ChatGPT) versus generative AI models tailored to our business needs?
3. Who has the authority to make decisions on generative AI use?
4. What information do we have to share, and with whom?
Even before large language models (LLMs) took the world by storm, the number of enterprises deploying AI was on the rise.1 But the public launch of ChatGPT and other LLM applications catalyzed both AI investment and global regulatory efforts. The general counsel (GC) can prepare senior leaders and the board even as the rules come into focus. A close reading of proposed and new regulations and guidance — in the EU, Canada, China, the U.S. and the U.K. — reveals shared underlying principles. The GC can help organizations develop corporate AI strategy by prioritizing them.

Although lawyers specializing in privacy, cybersecurity and AI told us the rules in the EU and Canada may not take effect until at least 2025, considering what is under discussion now will prevent companies from running into trouble with regulators later. China announced in July that interim regulations will take effect on 15 August 2023.2 Two of those proposals contain fines for noncompliance; maximum penalties in the EU’s proposed AI law exceed the attention-grabbing fines in the bloc’s General Data Protection Regulation (GDPR). Even if authorities don’t impose the highest possible fines, companies need defensible AI oversight and risk management that enable innovation and experimentation.

EU Regulation May Influence Others

The EU’s proposed AI law, first put forward in April 2021 and still under discussion, builds on the GDPR. Like that regulation, lawyers and business leaders expect it to become a roadmap for other countries to follow.3,4,5,6 Elsewhere, Canada is weighing the Artificial Intelligence and Data Act (AIDA), and China has interim rules for generative AI.7,8,9 Both the U.S. and the U.K., meanwhile, have issued guidance in the form of AI white papers (see Figure 1).10,11
»Figure 1. AI Regulations and Guidance in Five Jurisdictions

EU (proposed AI law)
- Release date: April 2021
- Scope: “... providers placing on the market or putting into service AI systems** in the Union.”
- Maximum penalties: €30 million or up to 6% of total worldwide annual turnover5
- Effective date: To be determined

Canada (Artificial Intelligence and Data Act)
- Release date: June 2022
- Scope: Companies that “design or develop a high-impact AI system” or “make a high-impact AI system available for use” or “manage the operations of an AI system.”***
- Maximum penalties: CA$25 million or up to 5% of global revenue6
- Effective date: To be determined

U.S. (AI white paper*)
- Release date: October 2022
- Scope: Guidelines apply to the public and private sectors.
- Maximum penalties: Not applicable
- Effective date: Not applicable

U.K. (AI white paper*)
- Release date: March 2023
- Scope: “We will not assign rules or risk levels to entire sectors or technologies. Instead, we will regulate based on the outcomes AI is likely to generate in particular applications.”
- Maximum penalties: Not applicable
- Effective date: Not applicable

China (interim rules for generative AI)
- Release date: July 2023
- Scope: Only applicable to services accessible to the general public within China
- Maximum penalties: Not applicable
- Effective date: 15 August 2023

* Nonbinding government guidelines
** The EU AI Act defines an AI system as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.”6
*** Canada’s draft act states that “high-impact AI systems” will be defined by forthcoming regulations to account for what’s laid out in the EU’s AI Act and technological advances. Among the potential considerations: whether the system will impact human rights and whether opting out is not possible.7

Source: Gartner
»Figure 2. Common Principles in Global AI Regulations and Guidance

• Transparency
• Risk Management
• Governance, Including Data Privacy
• Accountability and Human Oversight

Source: Gartner

One way that lawyers we interviewed suggest organizations should prepare: Disclose AI use in marketing content and the hiring process. At a bare minimum, legal leaders can help by updating the privacy notices and terms and conditions on their company’s website to reflect AI use. But it’s better to develop a separate section on the organization’s online “Trust Center.” Or post a point-in-time notice when collecting data that specifically discusses the ways the organization uses AI, assures individuals that their privacy rights won’t be negatively impacted and, just as important, makes clear that such use will deliver value to the customer or individual.

Assess whether your company needs stand-alone AI guidelines. Organizations such as IBM make it easy for clients and customers to understand how they are using AI by linking their stand-alone AI policy to the code of conduct, IT security policy and privacy policy.12

Let departments working on AI initiatives know they can help the business avoid privacy risk by being transparent from the start. Otherwise, legal and privacy leaders risk finding out about such projects only when they are finalized.
The risk management system shall consist of a continuous iterative process run throughout the entire life cycle of a high-risk AI system, requiring regular systematic updating.6,13
— EU’s proposed AI law

Like the proposed EU legislation, China also requires ongoing monitoring. China’s interim measure states, “Before using generative AI products to provide services to the public, a security assessment must be submitted to the state cyberspace and information department [i.e., the Cyberspace Administration of China].” Then organizations must register the algorithm on the official government website.2

The emerging practice of an algorithmic impact assessment (AIA) can document decision making, demonstrate due diligence, and reduce present and future regulatory risk and other liability. Creating an AIA must be a cross-functional endeavor. Besides legal, the GC should involve information security, data management, data science, privacy, compliance and the relevant business units to get a fuller picture of risk. Since legal leaders typically don’t own the business process they recommend controls for, consulting the relevant business units is vital. The White House blueprint calls for organizations to make such assessments “public whenever possible.”10

Canada’s existing Directive on Automated Decision Making, which requires Canadian government organizations to conduct AIAs, includes a tool that guides organizations through the process — starting with an assessment of risk areas such as the reasons for automation as well as the source and type of data used.14,15,16

Human oversight shall aim at preventing or minimizing the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used.6
— EU’s proposed AI law

Keep humans in the loop of AI development; regulatory measures call for it. That means establishing controls for humans to view, explore and calibrate AI system behavior. Where possible, these systems should be able to spell out why a particular result was achieved — sometimes called explainable AI — so users understand AI decisions. Legal leaders should mandate that third-party solution procurement and internal AI initiative development include the use of human-in-the-loop tactics to provide explainability and decrease risk.

AI-mature organizations are much more likely to involve legal teams in the AI development process, specifically in coming up with ideas for AI use cases.17

The GC could also establish a digital ethics advisory board of legal, operations, IT, marketing and outside experts to help project teams manage ethical issues.18 The White House blueprint notes that independent ethics committees can both review initiatives in advance and monitor them to check whether “any use of sensitive data” infringes on consumer rights.10
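To make the human-in-the-loop control idea concrete, here is a minimal Python sketch. It is purely illustrative, not a Gartner artifact: the class names, the 0.9 confidence threshold and the review callable are all hypothetical. It shows a gate that routes low-confidence AI outputs to a human reviewer and logs each decision with its explanation for auditability.

```python
from dataclasses import dataclass, field

@dataclass
class AIOutput:
    decision: str          # what the model recommends
    confidence: float      # model-reported confidence, 0.0-1.0
    explanation: str       # plain-language rationale (explainable AI)

@dataclass
class ReviewGate:
    """Route AI outputs to a human reviewer instead of acting on them directly."""
    confidence_floor: float = 0.9          # below this, a human must decide
    audit_log: list = field(default_factory=list)

    def submit(self, output: AIOutput, human_approve) -> bool:
        """Return True if the decision may proceed.

        `human_approve` is a callable (e.g., a review-queue hook) that
        returns the reviewer's verdict for outputs needing oversight.
        """
        needs_review = output.confidence < self.confidence_floor
        approved = human_approve(output) if needs_review else True
        # Record decision, rationale and reviewer involvement for later audits.
        self.audit_log.append({
            "decision": output.decision,
            "explanation": output.explanation,
            "human_reviewed": needs_review,
            "approved": approved,
        })
        return approved

gate = ReviewGate()
ok = gate.submit(
    AIOutput("deny_claim", 0.55, "Claim matches exclusion clause 4b"),
    human_approve=lambda o: False,   # the reviewer overrides the model
)
print(ok)  # False: the algorithm was not the sole deciding factor
```

A real deployment would wire `human_approve` to an actual review queue and extend the log fields to match whatever record-keeping a given regulator requires.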
»Figure 3. Discussion Guide on the Opportunities and Risks of LLMs

Opportunities for Using LLMs
• Are we targeting the right LLM business opportunities?
• How do we identify appropriate LLM use cases for our business while balancing risk with reward?
• What are our competitors doing?

Risk Mitigation Plan
• Are there any risks we should not accept regarding LLMs?
• Are we taking an appropriate amount of risk?
• Do we need any additional controls to mitigate risk?
• How much oversight of LLMs do directors want?

Source: Gartner

The advent of ChatGPT has also led national regulators within the EU to act. In May 2023, France’s data protection authority, the Commission Nationale de l’Informatique et des Libertés (CNIL), announced it will create a new department focusing on AI.22 The CNIL wants AI development to respect personal data rights such as individuals’ rights of access, rectification and opposition.

Legal and compliance leaders should manage privacy risk by applying privacy-by-design principles to AI initiatives. For example, require privacy impact assessments early in the project or assign privacy team members at the start to assess relevant risks. Better still, work with business partners such as the project management office to build privacy impact assessment requirements directly into project life cycles.

With public versions of LLM tools, organizations should alert the workforce that any information they enter may become part of the training dataset. That means sensitive or proprietary information used in prompts could find its way into responses for users outside the business. Leaders must establish guidelines, inform staff of the risks involved and provide direction on how to safely deploy such tools.
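One concrete shape such guidelines for public LLM tools can take is an automated check that screens prompts for sensitivity markers before they leave the business. The sketch below is illustrative only; the patterns and the blocking policy are hypothetical examples, not a Gartner recommendation.

```python
import re

# Hypothetical markers a company might treat as "do not send" signals.
SENSITIVE_PATTERNS = [
    re.compile(r"\b(confidential|internal only|trade secret)\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US SSN-like number
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),    # credential assignment
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons): block the prompt if any marker matches."""
    reasons = [p.pattern for p in SENSITIVE_PATTERNS if p.search(prompt)]
    return (not reasons, reasons)

allowed, why = screen_prompt("Summarize this CONFIDENTIAL roadmap for me")
print(allowed)  # False: the classification label was caught before leaving the firm
```

Pattern matching of this kind catches only obvious markers; it would complement, not replace, the staff guidelines and training the article calls for.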
… ‘a Moral Compass’

A Q&A With Dragos Tudorache, Co-Architect of the EU’s AI Act
by Laura Cohn

Dragos Tudorache co-leads the European Parliament’s work on the AI Act. A lawyer who began his career as a judge in his native Romania, Tudorache was elected to the European Parliament in 2019. Along with artificial intelligence and new technologies, his legislative work includes security and defense, transatlantic issues and internal affairs. Prior to becoming an MEP, he headed legal departments at the Organization for Security and Co-operation in Europe and the United Nations Mission in Kosovo.
Q&A
In a video interview from Brussels, Dragos Tudorache discussed what business should home in on to prepare for AI regulation, lessons from the GDPR, and his thoughts about warnings from tech leaders that AI could lead to human extinction. This interview has been edited for length and clarity.

What do organizations need to do now to prepare for the EU’s AI Act?

Well, if you are a company that uses AI in industrial processes, in optimizing flows in a factory, in things that have nothing to do with humans, let’s say, then you shouldn’t even bother asking your lawyers to read the text.

If you are developing AI in areas that have to do with us humans — whether it’s recruitment, whether it’s medical, whether it’s AI that is optimizing decision making — then it is likely that your AI is influencing environments that we believe need to be protected, and therefore you will need to ask your lawyers to start looking at this text.

When it comes to AI regulation, a critical tenet across jurisdictions globally is transparency. In your view, what does good transparency look like? What do companies need to reveal?

How your algorithm works, how you trained it, how you developed it, how you instructed it to function, and to actually reach its decisions or recommendations, or whatever it is that it generates as content. Is it something that induces risk?

How can organizations explain their algorithms — and where should they do that?

The regulation will provide templates for how you can comply with the technical data that you have to provide. There will be what is called an EU-wide AI registry, which is going to be a public database where high-risk applications of AI will have to publish all of this data that they have to provide as part of their transparency obligations.

So if I want to understand how the algorithm works, and the kind of data sets used, and what kind of instructions it received, then I open that, and look inside and say, ‘Aha.’ That’s how it was trained. That’s how it was instructed. That’s how it was tweaked to work. And then I can see whether there is anything wrong in the way it was done or not.

What transparency obligations do you see for producers of large language model applications?

These systems need a moral compass. And that’s exactly what we’re saying with this text to developers of large language models. They are great and fascinating, they are beautiful products, no doubt about it — but they need a moral compass. They need some rules and you have a responsibility to proactively, in the design and development, introduce safeguards that the content that is going to come out of your machine is not harmful and not against the law. That’s one obligation and the second has to do with copyrighted material.

What’s the obligation for companies there?

Large language models are also absorbing a lot of copyrighted material, whether it’s scientific articles, songs by ABBA or works of art.
Developers have an obligation to be transparent about the copyright material that they use. So that if I’m Drake and the Weeknd, I need to know that ChatGPT actually learned my songs. So in case it produces a song that very much resembles mine, then I can go and knock on the door and say, ‘Fella, you need to give me some dollars for that.’1

Let’s talk about human oversight. Do companies need to assign an AI point person, team or committee to review any outputs from AI, for example?

We have deliberately tried not to be overly prescriptive. This is in order to leave room for companies to achieve compliance in the best way they see fit.

But can you give business leaders some examples of how companies can provide effective human oversight?

I can’t tell companies exactly what to do. But you have to have a human in the loop. Which means what? That I can’t let an algorithm be the sole deciding factor without having, at some point, a human who takes a look. So I need to have someone, a person, who can have that final human touch to validate its content, its product.

For the EU’s AI Act, how important are the biometric verification safeguards?

This has been, and remains, the most ideological point in the text. Many misunderstand what we’re doing by saying that we’re going to inhibit biometric technology. No, we’re not. Biometric technology will remain. The one thing that cannot be done — and that has less to do with companies developing it, but more to do with law enforcement agencies in Europe — is, you cannot put an algorithm to run live, 24/7, in public spaces. We don’t want this in Europe.

Why is this so critical?

We don’t want to give a quasi-invitation to law enforcement to run these systems all the time on the grounds that maybe a criminal or missing child might pass by on the street.

How do you prevent abuse? And also, how do you protect privacy? Because basically what it means is that I will know that I will be walking in the street and there’s always someone who will biometrically identify me. That’s something that’s one step too far, according to how we understand privacy here in Europe.

In terms of future enforcement of the AI Act, have you learned any lessons from the EU’s General Data Protection Regulation (GDPR)?

We’re certainly trying to learn some lessons. First of all, in terms of avoiding the silo effect of different national regulators without much coordination, we are introducing a stronger mechanism of ensuring coherence in the implementation of the law across the EU. That’s number one. Number two, we put some teeth in this law.

Regulators will have the possibility to investigate algorithms, to request information, if it’s not clear from the data that companies have provided as part of the transparency exercise. So regulators can go in, knock on the door, and investigate the algorithm further. And if they find infringements, basically the options can go from shutting down the AI system, to withdrawing it from the market, to requesting changes, to very significant fines.

You were in Washington recently, talking to U.S. officials. What’s the outlook for international cooperation on AI regulation?

Unlike the GDPR, where major jurisdictions were not ready to accept the same level of protection as we were introducing in Europe, my feeling from talking to literally almost every democratic jurisdiction out there is that with AI, there is already a much better starting point. There is quite a lot of convergence in terms of understanding the challenges.

Will all of them adopt an act like we do? Most likely not, but my narrative has always been that this matters less. As long as we agree on the big principles and on the big political objectives and what we want to achieve, then you can accept diversity in terms of the type of legislation in place.

What was your take on the message from OpenAI’s Sam Altman and other tech leaders that AI could lead to human extinction, and that mitigating the risks “should be a global priority?”2

It’s interesting that someone like Sam Altman tells you that there is a risk of extinction. Compared to any other piece of digital legislation that we have worked on until now, this time around I’ve heard businesses big and small say, ‘Listen, we think it’s time to have some rules in place.’ It is good that they understand the responsibility they have. Also, as tech leaders, it is good that they engage on this and put pressure on lawmakers to start making decisions.

They know better than anyone else what’s behind this technology. If they sound alarm bells, that means that our instincts in putting forward rules were the right instincts.

So for me, their plea confirms that we have done the right thing in writing up some rules.

1 An A.I. Hit of Fake ‘Drake’ and ‘The Weeknd’ Rattles the Music World, nytimes.com
2 A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn, nytimes.com
Generative AI Will Affect
Information Security (and CISOs)
by Jeremy D’Hoinne, Avivah Litan and Peter Firstbrook
• Generative cybersecurity AI. Exploit opportunities to better manage risks, optimize resources, defend against emerging attack methods or reduce costs.
• Safe consumption of generative AI. ChatGPT was the first example; embedded generative AI assistants in existing applications will be the next. These applications all have unique security requirements that are not fulfilled by legacy controls.
• Development of new generative AI applications for the enterprise. These techniques have an expanded attack surface and require adjustments to application security practices.
• The evolving threat landscape. Malicious actors will refine their tactics and leverage generative AI where it is a good fit.
Examples and first demos of generative cybersecurity AI products and features appeal to many security professionals. However, our conversations with CISOs have highlighted questions and challenges related to cost and quality. One example of a concern: building new workflows and dependencies relying on immature technology that would require more work in a few years.

• Determine your corporate position on providing feedback to the applications and improve their efficacy in the long run.
• Identify changes in data processing and dependencies in the supply chain for security tools and require your security providers to be transparent about data usage.
• Adapt documentation and traceability processes to augment internal knowledge and avoid feeding the tools with only your insights.
• Monitor the release of fine-tuned or specialized models that align with the relevant security use case or for more advanced teams.
• Remember that using generative AI base models “as is” might not be adequate for advanced security use cases.
Customer retention and growth feature heavily in the plans of executives interested in generative AI. Thirty-eight percent of leaders see improving customer experience and retention as the primary purpose of initiatives to deploy applications trained on large language models, while 26% highlight revenue growth. Only 17% cite cost optimization (see Figure 1).1

Generative AI could help enterprises achieve all three goals by addressing rapidly changing consumer expectations. CEOs report price sensitivity as the top shift in customer behavior as inflation begins to bite, and 21% of them regard AI as the leading disruptive technology.2

Executive leaders should therefore pilot promising customer-oriented use cases of tools such as ChatGPT and other AI techniques while guarding against the biggest risks.

»Figure 1. Primary Focus of Generative AI Initiatives

Customer Experience and Retention: 38%
Revenue Growth: 26%
Cost Optimization: 17%
Business Continuity: 7%
None of the Above or Not Applicable (e.g., Vendor or Investor): 12%

n = 2,544
Source: 2023 Gartner Beyond the Hype: Enterprise Impact of ChatGPT and Generative AI Webinar Polls
Note: Results of these polls should not be taken to represent all executives as the survey responses come from a population that had expressed interest in AI by attending a Gartner webinar on the subject.
Wait one year. Wait one quarter. Is that kind of software or device starting to act as a customer for what you provide?

Executive leaders can position their organizations to benefit from the AI-fueled machine customer trend by taking the following steps:

• Make all your product and service information easily accessible to machine customers. They may be searching on 100 different variables, and you'll need to provide data for all of those, depending on where they are in the purchase process. Provide and encourage API access, and make sure CAPTCHA and other bot-thwarting tools are not shutting out legitimate machine customer revenue.

• Add machine customers to your core digital commerce and sales strategy. Strive to be exceptional at digital commerce. It will be the first place machine customers go when they want to buy from you. Consider the impact of these buyers on the way you sell and provide information.

• Develop a strong commercial partnership between sales, marketing, supply chain, IT and analytics. This team should create one to three scenarios that explore what happens when AI-enabled machine customers start buying from you. The supply chain must be agile enough to respond to unexpected demand patterns.

to work with AI agents. They will need to understand and possibly crack the algorithms that drive a machine customer's purchase behavior or after-sale service demands. These employees will need a basic understanding of how the technology works; some positions may require a data science background.

• Alert and train customer-facing staff to spot machine customers. Machines (generative AI LLM-powered API bots) posing as humans may already be trying to negotiate, book and buy from you through text-chat-based customer service lines, and perhaps even via telephone calls using high-quality voice synthesis.

2 How Walmart Automated Supplier Negotiations, Harvard Business Review.
3 Taylor Swift | The Eras Tour Onsale Explained, Ticketmaster.
4 LexCheck, LexCheck.
5 Nike Moves to Curb Sneaker-Buying Bots and Resale Market With Penalties, CNBC.
6 Expedia App Integrates ChatGPT, Forbes.
7 Watch DoNotPay's AI Chatbot Renegotiate a Comcast Bill to Be $120 Lower, PCMag.
8 The Inside Story of ChatGPT's Astonishing Potential, TED.
9 Lazada Unveils New eCommerce AI Chatbot LazzieChat, Marketing-Interactive.
10 Waymo Is Starting Driverless Taxi Tests in Los Angeles, Engadget.
11 Baidu Launches China's First Driverless Taxi Services in Chongqing and Wuhan in Landmark Moment for Autonomous Motoring, South China Morning Post.
12 Baidu, Pony.ai Win Permits to Offer Driverless Robotaxi Services in Beijing, Reuters.
13 Announcing Ford Blue™ and Ford Model e™, Ford.
14 GM's Cruise Robotaxi Unit Drives Deeper Into the Red, Reuters.
15 Watch Elon Musk's Full Interview With CNBC's David Faber on Twitter, Tesla and A.I. Advances, CNBC.
16 D. Scheibenreif and M. Raskino, "When Machines Become Customers," Gartner, 2023.
17 Never Run Out of Paper Again!, HP Inc.

© 2023 Gartner, Inc. and/or its affiliates. All rights reserved. CM_GBS_2463403
What Generative AI Means
for Your Talent Strategy
by Helen Poitevin and Pieter den Hamer
The accelerated investment in generative AI has unsurprisingly led to concerns about how this technology will affect jobs, including those once thought impervious to automation. Some roles will cease to exist, while many others will change radically, encompassing new tasks and requiring new skills. Faced with a potentially historic disruption, executive leaders should shape

AI applications have been affecting workers for years, but this time feels different. During the late 2010s, few executive leaders would claim publicly that layoffs were in any way due to investments in AI or automation. They would usually say they identified impacted staff, retrained them and shifted them to other roles. Today, however, as use of generative AI tools such as ChatGPT spreads rapidly, leaders are more willing to explicitly call out AI as one of the reasons positions will disappear — whether the technology is truly the culprit or not.1,2
Combining these demand and technology factors creates a range of situations (see Figure 1). Within each organization, and even within each team, multiple cases are likely to apply. For each situation, specific investments in talent development and workforce planning will

Figure 1. How AI Is Applied in Industries and Professions
Within the Boundaries: Shifting roles through automation and augmentation of existing work patterns and tasks.
Pushing Boundaries: Newly configured roles through transformation and augmentation; new work patterns.
Breaking Boundaries: Game-changing roles. Autonomous business with significantly lower labor-to-revenue ratio.
Increasing/High demand: Scale Up | New Impact Level | Symbiosis

2 IBM CEO Among the First Major Executives to Say They'll Replace Jobs With AI, Axios.
Using generative AI in this way will result in the shifting of roles over time. Fewer people will be needed to complete the same amount of work. Mass layoffs driven solely by generative AI adoption are unlikely, especially considering that labor markets remain historically tight. However, people who leave jobs affected by the technology are less likely to be replaced. Employees hired into these roles will be expected to accomplish higher work volumes more quickly. It will also become increasingly challenging to find talent willing to take on jobs that automation will likely displace.

Executive leaders implementing generative AI in this context should anticipate headcount reductions over time. They will need to redesign jobs displaced or disrupted by AI into smaller numbers of multiskilled generalist roles that encompass a wider range of capabilities and offer a more compelling employee value proposition.

Second, using generative AI to push the boundaries of professions and industries, or even break them, will set off a race for performance. Expectations will be higher in newly configured roles, and organizations won't be able to compete without using AI. The question will be not which tasks go to AI and which to humans, but how people can use AI creatively to reach new heights. New, highly specialized jobs will emerge where generative AI and related technologies are used creatively and strategically to transform what teams do and what their clients expect. This requires a rich blend of business and technological acumen that few possess.

For example, organizations will need executive-level business architects who wield both types of expertise in an entrepreneurial way. When it's too hard to find the right fit for these roles, build on experience with cross-disciplinary fusion teams that design and deliver digital products and services. Whether this specialized role is filled by a person with this rare mix of talents or by a fusion team, it will be critical for enterprises that make the leap to create new boundaries in a machine economy.
Can AI Overcome Flaws in Human Decision Making?
A Q&A With Daniel Kahneman and Olivier Sibony
by Steve Shapiro

Daniel Kahneman is professor of psychology and public affairs emeritus at the Princeton School of Public and International Affairs, the Eugene Higgins professor of psychology emeritus at Princeton University and a fellow of the Center for Rationality at the Hebrew University in Jerusalem. He is the author of Thinking, Fast and Slow.

Photo courtesy Wikimedia Commons
Q&A
When presented with the same facts, people will disagree and even draw different conclusions themselves from one day to the next.

This variability in decision making — what Daniel Kahneman, Olivier Sibony and their co-author Cass Sunstein call "noise" in their book Noise: A Flaw in Human Judgment — can create costly problems for organizations. Disagreements don't cancel out, but add up, they argue.

We should expect such inconsistency in humans — after all, we are not machines. But with machines now making decisions, can we overcome this flaw in human psychology? And what does that mean for organizations?

In separate conversations, Kahneman and Sibony told us why decision making often goes wrong and whether AI can provide answers. The interviews have been edited for length and clarity.

What's the biggest thing that people get wrong about "noisy" decisions?

Kahneman: People are not aware of the amount of noise. And they're certainly not aware of the damage that it does. One of the motivations for the book was a conversation with somebody who runs a hedge fund, and he thought that errors cancel out. With that misperception you're not going to take noise very seriously. When they have disagreements, it appears to be a one-off, but disagreement is actually the rule. People ignore the problem.

Has the emergence of new generative AI tools such as ChatGPT changed how you think about noise?

Sibony: One of the main reasons to use models and algorithms instead of humans to make judgments is that they may not be unbiased every time, but at least they are noise-free — if you ask the same question twice, you will get the same answer. Now, as we've all experienced with ChatGPT or Bard or whatever your favorite large language model is, that's not entirely true.

If you ask ChatGPT why it's not giving you the same answer every time, its answer, tellingly, is that in order to look and feel more natural, meaning more human to the human user, it intentionally introduces an element of randomness to the answers so that it will not feel like a machine.

But at a minimum, it is clearly noisy, so if you were hoping that those models would be a substitute for human judgment, that's going to be a problem.

What about algorithms in general? Can they solve the problem of noise if they don't act like humans?

Kahneman: Moving to algorithms will improve the quality and the accuracy of judgments. Algorithms tend to beat humans, or at least tie with them. The main reason for human inferiority in that regard is noise. Bias in human judgment tends to be drowned out by the noise. When you have a system that is not noisy, biases stand out very clearly, so there is something utterly nonsensical in all this talk about bias in AI. My guess is that it is going to be a rare case where the AI is more biased than the humans that it replaces.

I think some of the resistance to algorithms that we see comes from a deep misunderstanding of what's going on. Humans are noisy and that's why you sometimes don't see how biased they are.
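The deliberate randomness Sibony describes corresponds to what model providers expose as a sampling "temperature." The sketch below is a generic illustration of that mechanism, not ChatGPT's actual decoding code: at temperature 0 the highest-scoring token always wins (a noise-free algorithm in Kahneman's sense), while a positive temperature spreads probability over alternatives, so repeated runs disagree.

```python
import math
import random

def sample_token(scores, temperature, rng=None):
    """Pick a token from a {token: score} dict. Temperature 0 always
    returns the top-scoring token (deterministic, i.e. noise-free);
    higher temperatures add randomness to the choice."""
    if temperature == 0:
        return max(scores, key=scores.get)
    rng = rng or random.Random()
    # Softmax over temperature-scaled scores.
    m = max(scores.values())
    weights = {t: math.exp((s - m) / temperature) for t, s in scores.items()}
    total = sum(weights.values())
    tokens = list(weights)
    return rng.choices(tokens, weights=[weights[t] / total for t in tokens])[0]

scores = {"yes": 2.0, "maybe": 1.5, "no": 0.5}
# Like a simple rule-based algorithm: the same question gives the same answer.
deterministic = [sample_token(scores, 0) for _ in range(5)]
# Like a chat model with temperature > 0: answers vary across repeated runs.
rng = random.Random(7)
sampled = [sample_token(scores, 1.0, rng) for _ in range(50)]
```

The token scores here are made up for illustration; the point is only that the same scoring function produces identical answers at temperature 0 and a mix of answers otherwise.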
Sibony: Using algorithms in the most generic sense to make a judgment is going to be less noisy than using human judgment. That point obviously still stands. ChatGPT is the exception, not the rule, and any simple algorithm is noise-free. But since we have a preference for sophisticated and user-friendly algorithms over basic ones that apply simple rules, it's going to be more and more of a problem.

Should we use algorithms to try to eliminate noise completely?

Kahneman: I think the moment you start becoming aware of noise you know that you cannot eliminate it. And you don't want to reduce it to zero, because you want individuals to exercise their judgment. You do not want to completely turn them into machines.

There are costs to noise, but there are also costs to reducing noise, and you want to reduce those costs as much as possible. You want people to feel that they're expressing themselves. People need to see AI and algorithms as noise reduction tools that help them rather than as bureaucratic constraints on the way that they operate. You can't get to zero noise if you're using human judgment, but zero is not the best possible outcome.

What about situations where you have competing AI models or algorithms?

Kahneman: It's going to probably produce less noise than humans would, and that's how you've got to look at it. You've got to look at the alternative. And of course, now people find it shocking when different models disagree. But if they knew how much people disagree, they would be less shocked.

Of course there will be noise with multiple algorithms; that's bound to happen. But there is something else that happens when there are competing algorithms. Because typically algorithms are based on a lot of data, it's also possible to reconcile them and reduce the noise in a way that is really not possible when it's people generating noise.

Sibony: If you've got several models, you will have several points of view, just like when you have several humans. There is an advantage though here, which is that when you have one point of view and it's yours, that judgment seems to you a lot more correct than the judgment of another human. When you're looking at three different algorithms, you will tend to assume that each of them is to be taken with a grain of salt, and that's a better attitude.

If you can move to the mindset where your own judgment is one of the inputs, another human is another input, one AI model is another input and another AI model is yet another input, then you've recognized noise and you're doing something about it. You're not giving precedence to a single point of view, which happens to be yours.

So far it's slightly theoretical, because I haven't seen many situations where people are using more than one AI model to solve the same problem and wondering which device to follow. But more and more this is going to be the case. The more inputs we have access to from various models, the more we recognize that under uncertainty, a multiplicity of points of view is actually a good thing.
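Kahneman's observation that competing models can be "reconciled" to reduce noise, and Sibony's advice to treat your own judgment as just one input among several, both rest on a standard statistical fact: averaging independent, equally noisy estimates shrinks the spread of the combined judgment by roughly the square root of the number of inputs. A minimal simulation, with illustrative values and noise levels of my choosing:

```python
import random
import statistics

rng = random.Random(0)
TRUE_VALUE = 100.0  # the quantity every judge is trying to estimate

def noisy_judgment(noise_sd=10.0):
    """One judgment: the true value plus random error (noise, no bias)."""
    return TRUE_VALUE + rng.gauss(0, noise_sd)

# Spread of a single judge vs. the average of five independent inputs
# (people or models), measured over many repeated cases.
single = [noisy_judgment() for _ in range(2000)]
pooled = [statistics.mean(noisy_judgment() for _ in range(5)) for _ in range(2000)]

spread_single = statistics.stdev(single)  # close to the per-judge noise
spread_pooled = statistics.stdev(pooled)  # roughly 1/sqrt(5) of that
```

Nothing here requires any single input to be accurate; pooling reduces only the noise component, which is exactly why disagreement among inputs is a feature rather than a flaw.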
Machine learning (ML) can make financial planning more efficient and accurate, serving as the basis for generating meaningful forecasts that help executive leaders prepare for future disruptions. Yet, when FP&A leaders rush to replace traditional forecasting with this technology, their underdeveloped models — trained on poor historical data — can lead to untested algorithms that frustrate progress.

Alessandro Marchesano, head of FP&A at Switzerland-based ABB Electrification (ABB EL), a business unit of ABB, recognized that making headway with ML in financial planning required his team to take the time to learn how the technology works and understand what it does best. The other vital step was to identify the specific role humans will play in building, training and governing ML models.

ABB EL tested a human-machine learning loop method that empowered the FP&A team to:

1. Integrate complex external drivers into ML models.
2. Test, iterate and refine ML-based drivers.
3. Refine the algorithms regularly to maintain performance.

This ML pilot helped them gather unbiased data results and gain executive leadership buy-in to build on the project's success. As a result, they plan to scale the use of ML to other financial planning activities, such as on-demand scenario analysis and planning assumption updates. The team is also well-positioned to expand ML to other areas of planning and, eventually, other areas of ABB.

"Our forecasting transformation triggered deep discussions on key business drivers, is as accurate as our traditional bottom-up process, and is much faster," said Marchesano.

Having recognized that the first set of drivers chosen for each algorithm would always need fine-tuning, ABB EL's FP&A team quickly set about substantiating the ML results to make sure they reflected financial performance. To achieve this goal, FP&A created a three-step process to test the relevance of the drivers and measure their effect on outcomes (see Figure 2).

Figure 2:
INPUT: Source drivers for each ML model from business and finance experts →
Determine if drivers are statistically relevant to performance →
Use both AI/ML and traditional statistical models to test and verify business-sourced drivers →
OUTPUT: Drivers validated by ML (data scientists use the validated drivers to create ML algorithms) or drivers invalidated by ML algorithms.
Source: Adapted From ABB

The team:

1. Sourced external drivers for each complex business area from business and finance experts.
2. Used ML and statistical models to validate those business-sourced findings — and, if the FP&A team hit a dud, business analysts went back to the first step and continued to iterate until they uncovered the most effective drivers.
3. Created algorithms with data science teams using the validated drivers from the second step.
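The article does not disclose ABB EL's actual statistical tooling, so the sketch below is only a hypothetical illustration of step 2: business-sourced candidate drivers are screened by correlation with the outcome, and anything below an illustrative threshold is sent back to the experts for another iteration, mirroring the validated/invalidated branches of Figure 2. The driver names and data are invented.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between a candidate driver and the outcome."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def screen_drivers(candidates, outcome, threshold=0.5):
    """Split business-sourced drivers into 'validated' (handed to data
    scientists to build algorithms) and 'invalidated' (sent back to the
    experts to iterate), as in the two output branches of Figure 2."""
    validated, invalidated = [], []
    for name, series in candidates.items():
        bucket = validated if abs(pearson(series, outcome)) >= threshold else invalidated
        bucket.append(name)
    return validated, invalidated

# Illustrative monthly figures: order intake tracks revenue; the number of
# public holidays in the month does not.
revenue = [10, 12, 13, 15, 18, 21]
candidates = {
    "order_intake": [9, 11, 14, 14, 19, 20],
    "public_holidays": [2, 1, 2, 1, 2, 1],
}
validated, invalidated = screen_drivers(candidates, revenue)
```

A production screen would use more data and stronger tests (for example, significance tests or out-of-sample checks), but the loop structure — propose, test, return rejects for iteration — is the same one the three-step process describes.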
Consider these alternatives:

1. Voluntary reduction in hours — Many employees may willingly work fewer hours at commensurately lower pay.
2. Internal redeployment — Even where headcount must decrease, some may transfer to other parts of the business where their skills are in demand.
3. Sabbatical — An unpaid break can give employees the opportunity to pursue a professional interest that still contributes back to the company.
4. Executive compensation cuts — Lower base pay temporarily while preserving long-term incentives.

None of these options will make everyone happy, but they can keep a company in position to recover faster when better days arrive.

2. Central — Send the money saved to an enterprisewide pool for reinvestments designed to grow revenue based on organizational priorities.

For instance, the finance function at a U.S. medical equipment company offered business leaders concrete benefits based on central winbacks. They could submit proposals for what they would do with the money if they could reclaim half their savings.

The result? Different business units competed to unearth more cost reductions — increasing the amount available to fuel growth.

They can use these funds as a cushion against budget overruns, without delaying other projects. Plus, it will bake the concept of uncovering resources into a "business as usual" practice instead of a one-time action.

— Vaughan Archer
— Shivendra Singh and Amrita Puniani
— Roma Kaur
— Kayla Velnoskey, Iga Pilewska and Kate McLaren-Poole
3. Courage — Defend your choice to do less, pushing back on urgent but unimportant requests. All major investments should be able to justify themselves based on measurable contribution to future goals, not those in the past.

— Sharon Cantor Ceurvorst

Some examples: Internalize customer perspective while developing ideas and solutions. Lead a topic-specific brainstorming session to generate unique insights. Work with business partners to agree on transformational targets and update them quarterly about whether project teams are taking enough risk in their work.

— Amisha Ajay
1. Support effective cyber risk assessment with risk acceptance, escalation and exemption procedures and a representative steering committee that includes business unit leaders to enable shared decision making.
2. Make security policy more flexible by allowing choice among technology options that achieve the same objectives. Additionally, rationalize the number of security policies, while co-creating new policies and guidance with the business.
3. Go beyond traditional security awareness training with tools and playbooks that build business technologists' independent cyber judgment.
4. Clearly communicate to the enterprise that the primary role of the CISO is enabling digital innovation — rather than enforcement of security controls.

— Tom Scholtz

1. Using real company datasets for hackathon activities quickly connected visualization to participants' daily responsibilities. It also made the outputs immediately useful for their jobs.
2. Training high-potential talent as peer guides provided support where facilitators couldn't. The fact that these colleagues had only recently received their own introduction to the tool reduced skepticism in the ranks.
3. Inviting top executives to evaluate the outputs offered employees exposure to senior leadership and motivated presenters to impress when they discussed their work. Management benefited, too, by witnessing the technology's impact and potential firsthand, acquiring more ideas for innovation, and gaining an understanding of the business case for additional investment.

— Jose Rosario
© 2023 Gartner, Inc. and/or its affiliates. All rights reserved. CM_GBS_2510939