01) Integrating Intuition & AI in DM
experience in the decision-making domain, an expert is instead able to recognize the correct course of action without much deliberation (Klein et al., 1986). Consequently, intuition is now considered an effective mode of decision-making when the decision maker is a domain expert and when the task is ill-structured (Dane & Pratt, 2007).

3. Expert intuition

As argued by supporters of the naturalistic decision-making view and as evinced by existing research (e.g., Dane et al., 2012), expertise in the decision-making domain is a necessary condition for the effectiveness of intuition. Domain expertise refers to the amount of knowledge and experience the decision maker has in a particular field. For example, a chess grand master is a domain expert in the game of chess. Domain expertise has been found to lead to effective intuitive decisions (e.g., Chase & Simon, 1973; Dane et al., 2012; Dijkstra et al., 2013; Hammond et al., 1987). But the effectiveness of an expert's intuitive judgment is restricted to their domain of expertise (Dane & Pratt, 2007). Therefore, an individual who is an expert in one domain may not be able to make an effective intuitive decision in a completely different domain. Relatedly, the intuitive judgments of novices are ineffective owing to their lack of domain-relevant knowledge (Dane et al., 2012).

4. Ill-structured tasks

In addition to domain expertise, the other necessary condition for the effectiveness of intuition depends on task characteristics. When dealing with problems that are conducive to analytical solutions, analytical decision-making may be best. However, when dealing with problems that are ambiguous and ill-defined, intuitive decision-making may well be a better option (Denhardt & Dugan, 1978; Friedman et al., 1985; Hammond et al., 1987; Hogarth, 2002). Tasks that are conducive to analytical solutions, also referred to as intellective tasks (Dane & Pratt, 2009), are highly decomposable (Hammond et al., 1987) and can be solved using reason or mathematical formulas. In such decomposable tasks, an individual can analytically solve the problem and articulate or illustrate the steps taken to derive the solution. Given these characteristics, intuition may not be effective for decomposable tasks (Dane et al., 2012). Analytical tools such as AI appear to be better than humans in solving such decomposable tasks.

In contrast, intuition is effective for complex and ill-structured tasks that are not as easily decomposed (e.g., Dane et al., 2012; Dijksterhuis, 2004). Unlike decomposable tasks, these types of tasks are abstract and are difficult to solve using math and logical inference. As a result, for ill-structured tasks, also referred to as judgmental tasks (Hammond et al., 1987), it is difficult to derive a solution using a purely analytical process. Therefore, for such tasks, intuition may be more effective, as the intuitive process allows the individual to make holistic judgments by considering the different aspects of the task that cannot be combined using an analytical method. In fact, managers generally prefer intuition for unstructured tasks where there is no clear objective method to solve the problem (Hodgkinson et al., 2009).

5. AI in organizational decision-making

AI is defined as "a system's ability to interpret external data correctly, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation" (Kaplan & Haenlein, 2019, p. 15). Put more simply, AI is intelligent machines that can think, learn, and make decisions accordingly. A distinguishing feature of AI systems from other computerized systems is that, similar to the human intellect, these nonhuman entities can not only absorb and process information but also learn and update their decisions based on new information. In essence, these artificial systems are made to mimic the processes of the human brain, hence the term "artificial intelligence."

The concept of AI is believed to have originated in the mid-20th century (Haenlein & Kaplan, 2019) but has recently gained much prominence in both academic and industry circles. A historic event that marked the emergence of AI as a game changer in the modern era is the much-publicized IBM Watson's performance in Jeopardy (Jarrahi, 2018). In another landmark event, AlphaGo, an AI program developed by Google, defeated the world champion in go, which is a board game that is argued to be substantially more complex than chess (Haenlein & Kaplan, 2019).

The extreme capabilities of AI are rooted in advanced computational programs that are developed through artificial neural networks and deep learning models. Artificial neural networks are adaptive computational models that use
numerous algorithms to make complex nonlinear associations within substantial datasets (Miller & Brown, 2018). Deep learning is an advanced form of artificial neural network that uses multiple layers of processing to learn hierarchical representations of data (Young et al., 2018). For example, in image recognition, by using prior data to learn the characteristics of a particular image type, a program based on an artificial neural network will be able to recognize and classify a new image (e.g., a human face). A deep learning program, on the other hand, can extract more in-depth information that enables a deeper understanding of the image (e.g., determining the emotional state of an individual through recognizing facial expression patterns).
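For readers who want to see the mechanics, the following is a minimal sketch of such a layered network, written in Python with NumPy and trained on synthetic data. The architecture, the data, and all parameter values are illustrative assumptions rather than a description of any system discussed in this article.

```python
# Minimal sketch of a multilayer ("deep") neural network on synthetic data.
# Each layer transforms the output of the previous one, so later layers
# learn progressively higher-level representations of the input.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic task: classify 2-D points by whether they fall inside the unit circle.
X = rng.normal(size=(500, 2))
y = (np.linalg.norm(X, axis=1) < 1.0).astype(float).reshape(-1, 1)

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two hidden layers of weights: the "multiple layers of processing."
W1, b1 = rng.normal(scale=0.5, size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.5, size=(16, 8)), np.zeros(8)
W3, b3 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)

lr = 0.1
for epoch in range(2000):
    # Forward pass: each layer builds on the representation of the previous one.
    h1 = relu(X @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    p = sigmoid(h2 @ W3 + b3)            # predicted probability of "inside circle"

    # Backward pass (gradient of binary cross-entropy), then a gradient step.
    grad_out = (p - y) / len(X)
    grad_W3 = h2.T @ grad_out
    grad_h2 = (grad_out @ W3.T) * (h2 > 0)
    grad_W2 = h1.T @ grad_h2
    grad_h1 = (grad_h2 @ W2.T) * (h1 > 0)
    grad_W1 = X.T @ grad_h1

    W3 -= lr * grad_W3; b3 -= lr * grad_out.sum(axis=0)
    W2 -= lr * grad_W2; b2 -= lr * grad_h2.sum(axis=0)
    W1 -= lr * grad_W1; b1 -= lr * grad_h1.sum(axis=0)

accuracy = ((p > 0.5) == (y > 0.5)).mean()
print(f"training accuracy after 2000 steps: {accuracy:.2f}")
```

The point of the sketch is simply that the network is never told an explicit rule; it adjusts its layered weights from examples, which is the sense in which deep learning "learns" hierarchical representations from prior data.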
AI has fundamentally changed not only organizational processes but also the roles humans play in organizations. Over the last few years, capabilities of AI have progressed from repetitive tasks to analytical tasks and now to thinking tasks, taking on more of the responsibilities that previously were assigned to human intelligence. As a consequence, humans are increasingly assigned to managing the interpersonal aspects of a business that AI has not yet mastered (e.g., communicating with internal and external stakeholders, conflict resolution), thus giving rise to the "feeling economy," in which most human jobs may become primarily relational (Huang et al., 2019).

AI is used for various types of organizational tasks. For example, recent developments have seen a surge of AI in predictive tasks, such as forecasting weather based on past weather patterns (Agrawal et al., 2019). This advancement in predictive technology certainly aids human decision-making, as decision makers are now better able to compare the potential outcomes and risks associated with decision alternatives. Furthermore, and marking a significant breakthrough in technology, there is evidence that AI can even be effective in environments that have imperfect information, the type of environment generally prevalent in an organizational decision-making context. For example, DeepStack, an algorithm developed for imperfect information settings, uses deep learning methods to derive a judgment similar to human intuition to estimate the value of cards held by opponents in games of poker. By doing so, DeepStack has been able to defeat professional poker players at a statistically significant level (Moravčík et al., 2017). As poker is the epitome of a context with imperfect and asymmetric information, the success of DeepStack is an indication of how AI can be used in dynamic and competitive business environments, where managers often have to make important decisions based on imperfect information.

6. AI concerns

The proliferation of AI in organizational processes has simultaneously raised several concerns. For example, various concerns are relevant in the use of AI in the field of human resources (HR). These include the high complexity of HR problems, small data sets, ethical and legal constraints, and employee reactions to AI (Tambe et al., 2019). The complexity of HR largely revolves around the difficulty in assessing what constitutes a good employee and the interconnectedness of jobs that makes it difficult to separate an individual from group performance. For example, even with AI technology, it is difficult to accurately measure intangible attributes, such as an individual's contribution to positive team morale.

In terms of the small data set issue, AI is reliant on extensive data input to facilitate machine learning. Nevertheless, there may not always be enough historical data to allow AI to make accurate predictions. For example, past employee data may not be available for a job role that is new to the organization. Ethical and legal considerations are typically the result of privacy issues surrounding the increased gathering, storing, and usage of employee data. Lastly, due to the skepticism about decisions made by algorithms (Dietvorst et al., 2018), employees may have a negative reaction toward AI-based decisions, especially if those decisions have an adverse impact on the employees (Tambe et al., 2019).

The challenges of AI assimilation to organizational decision-making are not unique to the HR function. Many organizational problems, especially those involving interpersonal issues, are intricate, with nuances that are not readily assessed even with advanced computing. Lack of relevant historical data to guide future decisions, ethical and legal considerations, and negative employee reactions to decisions made by computers rather than humans are all concerns that need to be overcome to use AI in organizational settings effectively.

7. Similar concerns between AI and intuitive decision-making

Some of the issues of decision-making using AI are not dissimilar to concerns with human intuitive decision-making. For one, human intuition often is characterized as prone to individual biases (Akinci & Sadler-Smith, 2013).
Table 1. A case for integrating expert intuition and AI: Real-world examples

Risks of AI overdependence without human collaboration:

- At its outset, Amazon's free same-day delivery service received backlash for discriminating against largely minority neighborhoods. Amazon used an algorithmic system to decide in which areas to offer this service. The company claimed that race was not a consideration in the calculations and that the decision was based on a cost-benefit analysis (e.g., the number of Amazon Prime members in the ZIP code, distance from the nearest Amazon warehouse). But the areas that were excluded from this service had a majority African American population, likely because most Prime members lived in predominately Caucasian neighborhoods. As a result, Amazon was heavily criticized and later expanded the service to some of the previously excluded areas (Ingold & Soper, 2016).

- In a study conducted by ProPublica in Broward County, Florida, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), an algorithm-based computer system used by the U.S. courts to estimate the likelihood of recidivism, was found to be biased against African American defendants. Although skin color was not a factor considered in the calculations, other factors, such as the crime rate in the defendant's neighborhood, may have inadvertently biased the system against those of a certain race (Angwin et al., 2016). Although Northpointe (the company that developed the software) disputed the ProPublica analysis, the findings highlight the grave consequences if decisions are made solely based on the risk scores derived from such computer systems.

- Watson for Oncology, an AI-based program developed by IBM to generate treatment recommendations for cancer patients, has come under criticism for not delivering as promised. The program was developed to improve clinical decision-making by analyzing vast amounts of medical literature and patient health records. Although Watson learned how to scan published clinical studies and to gather statistics about the main outcomes, the program could not be taught how to extract information the way a physician does. As a result, the recommendations were not as useful, which has hindered the adoption of the program in clinical care (Strickland, 2019). The inability of an advanced AI system such as Watson to match the inferring capabilities of a human expert stresses the importance of combining human expertise with AI.

Successful AI and human collaboration:

- An AI-based recruiting system used by Unilever attempted to combine human and AI capabilities to increase the efficiency and effectiveness of the company's recruiting process. The AI-based system was used in the first two rounds of selection to determine candidate traits (e.g., risk aversion), capabilities, and suitability to specific positions. Depending on the AI's judgment, the best candidates were then invited for in-person interviews, where human decision makers made the final hiring decisions. The collaboration resulted in broadening the scale of Unilever's recruiting, with increases in both the overall number of applications as well as the socioeconomic diversity of applicants. Also, recruiting speed increased from 4 months to 4 weeks, while the time recruiters spent reviewing applications decreased by 75% (Wilson & Daugherty, 2018).

- As part of a Defense Advanced Research Projects Agency (DARPA) program, researchers at the Harvard Medical School collaborated with an AI-based system to identify the reasons why a powerful melanoma drug lost its effectiveness in helping patients after a few months. The researchers followed an iterative process in which they inputted their ideas about interactions among proteins within cells to the computer system that, in turn, considered their thinking and used advanced calculation techniques to generate a set of results. The researchers analyzed these results and entered new ideas into the system, which then responded with a new round of analysis and results. In essence, the researchers engaged in a type of brainstorming session with AI to evaluate and advance ideas. In doing so, they improved the decision-making process by incorporating not only these impressive AI capabilities but also expert human intuition (Prabhakar, 2017).

- In an attempt to build a more efficient and responsive supply chain, Intel is seeking to use AI to assess and manage its supplier relationships. Some of the functions of the AI system include monitoring supply disruptions and reputational risks as well as finding potential new suppliers. Although the tech giant expects most of the analysis to be seamless, the company understands that some of the supplier evaluations and sourcing decisions will be complicated and will require human involvement in the decision-making process. Therefore, the system is reliant on both humans and AI (Saenz et al., 2020).
Even though AI is not influenced by emotion and therefore is not biased in a human sense, an AI can only base its decisions on the data to which it has been exposed. There is therefore an indirect bias in AI decision-making because it does not account for information that was not entered into the system. For example, Amazon aborted an AI-based recruiting model that was developed to identify promising candidates because it was biased against women. As the computer model was trained on resumes received by the company in the preceding 10 years, the model automatically favored male applicants owing to the male dominance in the tech industry during that time (Dastin, 2018). As a result, the system discriminated against female candidates.

Another AI issue that is similar to human intuition is explainability. Because intuition is an unconscious and automatic process (Dane & Pratt, 2007), humans often struggle to explain how they came to an intuitive decision. They are unable to articulate the decision-making process or to clearly explain the factors they considered in making the decision. Likewise, a decision derived by a multilayered machine-learning algorithm based on weighted combinations of a myriad of factors is difficult to explain (Tambe et al., 2019). This lack of explainability not only gives rise to skepticism and fairness concerns but also hinders organizational learning and development, as the processes involved in making decisions remain mysterious.

Overall, an advantage of AI is that it is objective and impartial since it is unhindered by social considerations. But the lack of social and emotional intelligence can also be a disadvantage, as human emotions play a pivotal role in organizational success or failure. On the other hand, human intuition can be flawed because of biases and the human inability to process large amounts of information. Therefore, relying on either technique alone seems less than optimal for organizational decision-making. In fact, underestimating the value of integrating intuition and expertise with algorithmic capabilities has been found to lead to application failures and missed learning opportunities for organizations (Saenz et al., 2020; see Table 1 for examples of some risks associated with AI overdependence without human collaboration).

Consequently, to improve organizational decision-making, it seems prudent to combine both expert intuition and AI in making decisions. For example, in a research study involving 1,500 companies, Wilson and Daugherty (2018) found that organizations achieve the most significant performance improvements when humans and AI collaborate (see Table 1 for some examples of successful AI and human collaboration). The question is: How can decision makers combine their intuitive expertise with AI to make effective decisions in today's complex and ill-defined organizational environments?

8. Integrating AI and intuition

Scholars have suggested that one way to enhance the collaboration between human intuition and AI is for each to focus on different types of decision tasks. Owing to its high analytic capabilities, AI can focus on complex analytical tasks, whereas humans can focus on tasks with uncertainty and equivocality (i.e., ambiguous decision situations that can lead to divergent interpretations). However, according to Jarrahi (2018), even the most complex analytical tasks may have elements of uncertainty and equivocality, which highlights the importance of human involvement in such decisions. Therefore, it may not be optimal to assign a decision-making mode solely based on the type of task.

In this article, I argue that the effectiveness of collaboration between human intuition and AI for decision-making is dependent on two main criteria: the expertise of the human decision maker (an individual characteristic) and the properties of the task (a task-related characteristic). As previously noted, intuition is only effective when the decision maker is a domain expert
and when the task is ill-structured. Therefore, human intuition in combination with AI will only be useful when both of these conditions are met. If the decision maker is a novice, it may be prudent to delegate decision-making authority to AI regardless of whether the task is structured or not, since the novice will not have the capacity to supplement or correct a decision derived through extensive computation. Likewise, if the task is structured and an accurate decision can be derived through logical analysis, the decision should be delegated to AI because not even expert humans can match the speed and analytic capabilities of computer systems.

Assuming the two necessary conditions are met, how should decisions be made? Figure 1 presents a decision-making model with two different sequential approaches to combining intuition and AI. The first approach, termed the confirmatory method, is when the decision maker first makes an intuitive judgment and then uses AI to either confirm or change the initial decision. The second approach, termed the exploratory method, is when the decision maker first uses AI to identify potential decision options and then decides on the basis of intuition.
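The following Python sketch expresses this logic as plain control flow. It is a simplification offered for illustration only: the helper names (intuitive_choice, ai_evaluate, ai_shortlist), the cutoff separating "few" from "many" alternatives, and the handling of time pressure are assumptions rather than a formal specification of the model.

```python
# Illustrative sketch of the integrative intuition-AI decision model (assumed helper names).

def confirmatory(options, intuitive_choice, ai_evaluate, under_time_pressure=False):
    """Intuition first; AI is then used to confirm or challenge the intuitive call."""
    remaining = list(options)
    while remaining:
        choice = intuitive_choice(remaining)      # expert's gut call
        verdict = ai_evaluate(choice)             # True = confirm, False = contradict, None = inconclusive
        if verdict or under_time_pressure or len(remaining) == 1:
            return choice                         # confirmed, or the expert's intuition prevails
        remaining.remove(choice)                  # with more time, reevaluate other options

def exploratory(options, ai_shortlist, intuitive_choice, under_time_pressure=False):
    """AI first narrows the options; expert intuition then evaluates the shortlist."""
    remaining = list(options)
    while remaining:
        shortlist = list(ai_shortlist(remaining))     # AI proposes a few possibilities
        pick = intuitive_choice(shortlist)            # expert's pick, or None if none feels right
        if pick is not None:
            return pick                               # intuition endorses an AI-derived option
        if under_time_pressure:
            return intuitive_choice(remaining)        # expert judgment takes precedence over AI
        remaining = [o for o in remaining if o not in shortlist]   # restart with other options
    return None

def integrative_decision(options, is_domain_expert, task_is_ill_structured,
                         intuitive_choice, ai_evaluate, ai_shortlist,
                         under_time_pressure=False, few_threshold=3):
    """Top-level gate: both necessary conditions must hold for intuition-AI collaboration."""
    options = list(options)
    if not (is_domain_expert and task_is_ill_structured):
        return ai_shortlist(options)[0]               # delegate the decision to AI alone
    if len(options) <= few_threshold:                 # few alternatives -> confirmatory method
        return confirmatory(options, intuitive_choice, ai_evaluate, under_time_pressure)
    return exploratory(options, ai_shortlist, intuitive_choice, under_time_pressure)
```

The two branches differ only in ordering: in the confirmatory path the human commits first and AI checks; in the exploratory path AI screens first and the human commits. Sections 8.1 and 8.2 walk through each path with a concrete example.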
8.1. The confirmatory method

In the confirmatory method, the expert decision maker first makes an intuitive judgment relating to a particular task or business problem. Through this method, the decision maker identifies one or more alternative approaches to tackle the issue at hand. Then, AI methods are used to evaluate the approaches identified by the decision maker. If the decision maker's intuitive decision is confirmed by AI, then action should be taken to implement that decision. But if the AI analysis contradicts the intuitive decision, or if the results are inconclusive (i.e., if the AI does not confirm or contradict the intuitive decision), then two courses of action can be taken depending on the time sensitivity of the business problem. Ideally, if more time is available to make a decision, the decision maker should consider other options and reevaluate those decisions using AI until a satisfactory decision is reached. If under extreme time pressure, then the expert decision maker should go with their intuitive decision, as intuitive decisions have been found to outperform analytical methods for ill-structured tasks when the decision maker is an expert (Dane et al., 2012).

The confirmatory method is most useful in situations in which there are a limited number of decision alternatives. To illustrate this decision-making process, let us consider a fourth-down conversion attempt in the game of American football. In this ill-structured decision-making context (owing to outcome uncertainty), the offense (the team with the ball) has one last attempt to advance the ball beyond the first-down marker that would allow the offense to keep the ball. If the offense does not progress beyond the first-down marker, the other team will get the ball and gain field position. As fans of American football know, this is usually a pivotal point in a game that could well determine the outcome.

When faced with this problem, the coach of the offense (i.e., a domain expert) has two main decision alternatives (i.e., limited decision alternatives): to go for the fourth-down conversion or to kick the ball. The coach could intuitively decide to go for the conversion and then check with their analytical team to assess the probability of success, which often is derived through AI-based complex computational techniques using past data. Major-league sports teams across the world often employ AI-powered advanced sports analytics software to aid decision-making. If the analytics confirm the coach's intuitive decision, then the decision is implemented. However, if the analytics contradict the coach's decision or are inconclusive, then the coach must decide to go either with their gut or with the analytical data. This decision usually must be made within a matter of seconds. Since there is no time to reevaluate options, the coach's best choice is to go with their gut feeling. Even though the AI algorithms likely considered thousands of similar decision situations and a multitude of other factors, the coach has a unique advantage because they are more familiar with the current game situation. For example, the coach can sense the confidence and the energy level of the players, which the analytics would not have considered.

8.2. The exploratory method

In the exploratory method, AI is first used to identify a set of decision alternatives, which are then evaluated by an expert decision maker. This method is best for situations that have many decision alternatives, as AI can narrow the decision options and provide a few possibilities for the decision maker to choose from. This reduces the mental strain on the decision maker and minimizes decision flaws caused by an individual's inability to process large amounts of data. If the human decision maker's intuition confirms the decision derived through AI, then the decision is implemented. If not, as with the confirmatory method,
the decision process should be restarted until a satisfactory decision is arrived at. However, if under extreme time pressure, then the expert decision maker's intuitive judgment should take precedence over AI for the reasons previously noted.

An example of the use of this method would be selecting an employee for a complex job. Hiring for a complex job (e.g., executive-level positions) can be an ill-structured task, as it is often difficult to set accurate evaluation standards for hiring owing to the difficulty in determining which characteristics lead to successful job performance (Chen et al., 2008). In this situation, AI can be used to narrow down the candidate pool to a few candidates, and then expert judgment can be used to
select the best candidate. In another example, a medical doctor can use AI to diagnose a medical condition and then rely on their expert judgment to decide on the best course of treatment.
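A hypothetical sketch of the AI-narrowing step in such a hiring scenario is shown below; the candidate attributes, scoring weights, and shortlist size are invented for illustration, and the final-choice function merely stands in for the expert's in-person, intuitive judgment rather than for anything that can be automated.

```python
# Hypothetical exploratory-method sketch: AI scores a large candidate pool,
# returns a shortlist, and a human expert makes the final call.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    years_experience: float
    assessment_score: float     # e.g., structured work-sample test (0-100)
    interview_rating: float     # e.g., structured phone screen (0-10)

def ai_shortlist(pool, k=5):
    """Rank candidates with a simple weighted score and keep the top k.
    The weights are illustrative, not a validated selection model."""
    def score(c):
        return (0.5 * c.assessment_score
                + 3.0 * c.interview_rating
                + 1.0 * min(c.years_experience, 10))
    return sorted(pool, key=score, reverse=True)[:k]

def expert_final_choice(shortlist):
    """Placeholder for the expert's holistic, intuitive judgment:
    in practice this is an in-person interview, not a formula."""
    return shortlist[0] if shortlist else None

pool = [
    Candidate("A. Rivera", 12, 84, 8.5),
    Candidate("B. Chen",    4, 91, 7.0),
    Candidate("C. Okafor",  9, 77, 9.0),
    Candidate("D. Patel",   2, 88, 6.5),
    Candidate("E. Silva",  15, 69, 8.0),
    Candidate("F. Novak",   7, 95, 5.5),
]

shortlist = ai_shortlist(pool, k=3)          # AI narrows the field
hire = expert_final_choice(shortlist)        # expert intuition makes the final decision
print([c.name for c in shortlist], "->", hire.name)
```

The division of labor mirrors the Unilever example in Table 1: the algorithm handles the volume, while the expert retains the final, holistic judgment.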
9. How does the model augment organizational decision-making?

This model extends existing frameworks that attempted to combine humans and AI in organizational decision-making. For example, Shrestha et al. (2019) proposed a framework that includes an AI-to-human (AI decisions as input to human decision-making) and human-to-AI (human decisions as input to AI decision-making) sequential decision-making process. An AI-to-human decision-making sequence is suitable when there are many decision alternatives and the specificity of the decision search space is high in the first phase and low in the second phase. The authors argued that AI algorithms are effective in a decision search space that can be well specified (i.e., well structured). Humans, on the other hand, can accommodate a loosely defined decision space. A human-to-AI sequence is best when there are a small number of decision alternatives and the decision search specificity is low in the first phase and high in the second phase. In both sequences, decision speed and replicability of the process are adversely affected by human involvement.

The hybrid process proposed by Shrestha et al. (2019) emphasizes important considerations for managers in decision situations that have a combination of well-structured and ill-structured decision environments. AI is used for the structured component, and humans are used for the unstructured portion. But their framework does not propose a specific strategy for predominantly ill-structured decision environments, a type of environment that is common in today's novel and dynamic business settings. To this end, the model presented in this article outlines a process to make decisions in ill-structured decision contexts. Furthermore, the present model emphasizes the role of expert intuition, a factor that was not considered by Shrestha et al. (2019). As previously noted, both task-related and decision-maker-related factors must be considered when integrating humans and AI. Lastly, when the appropriate conditions are met, the present model provides guidance on how to make decisions when humans and AI disagree on the best course of action.

It is important to note the iterative nature of this model. Barring extreme time pressure, if humans and AI disagree on a decision, the decision options should be reevaluated. In certain situations, it may be useful to follow a different decision path than what was initially used. For example, because of the uncertain and evolving nature of many ill-structured problems, a business problem that was initially thought to have a few decision alternatives may have other decision options that the decision maker had not considered. Therefore, if the confirmatory method did not produce a satisfactory result (i.e., a human decision that was confirmed by AI), then using the exploratory method may unearth some options that were previously overlooked.

10. Implementing the model for organizational decision-making

As previously noted, the effectiveness of this model is predicated upon two key factors: the expertise of the decision maker and the presence of ill-structured tasks. Therefore, when using this model to make organizational decisions, it is important to ensure that both conditions are met. With regard to the decision maker's expertise, research suggests that it may take 10 years of intense experience for an individual to achieve expertise in a particular domain (Dane & Pratt, 2007). However, there is some indication that individuals may be able to make effective intuitive decisions with a moderate level of experience (e.g., Dane et al., 2012). The time it takes to achieve expertise may vary based on the profession and should not be measured by years of experience alone; both the breadth and depth of knowledge should also be considered.

Regarding the structure of the problem, if a precise solution can be derived through analytical processes such as computation, then such methods should be used to resolve the issue. But when decision cues are vague and there is ambiguity surrounding the best course of action, then the model presented in this article guides an expert decision maker, in combination with AI, to make effective decisions. In essence, the model is not necessary for programmed decisions (i.e., recurring decisions where decision rules have been established) but will be useful for nonprogrammed decisions (i.e., infrequent decision scenarios where decision rules have not been established). Importantly, the counterbalancing effect of having both humans and AI in the decision-making process minimizes the risk of making a poor decision.

The framework can be useful for all levels of organizational decision-making. As depicted in Figure 2, the use of the model will vary depending
on the decision level. At the lower level (i.e., operational level), most decisions are routine and typically fall into the programmed decision category. As such, these types of decision environments are often well-structured, where decisions are arrived at through well-established decision rules. Since AI-based systems will be more efficient than humans in making such decisions, the integrative intuition-AI decision-making model may not be frequently required at this level. At the tactical level, where medium-term decisions are made to achieve strategic goals, the model becomes more useful as the decision environment becomes less structured and managers must incorporate their expert judgment to make decisions. At the strategic level, senior managers make complex decisions in often ill-structured decision environments. Strategic decisions, such as which market to enter or which business to purchase, typically are marked by uncertainty and ambiguity. Therefore, at this level, the model can be more frequently used to make decisions.

The usefulness of the model should be assessed for both efficiency and effectiveness using a combination of quantitative and qualitative measures. Table 2 presents some examples of such measures that could be used for this purpose. Using a variety of metrics would allow a more comprehensive assessment of the model.
Table 2. Examples of measures for assessing the model

- Frequency (efficiency): How often is the model used for decision-making?
- Success and error rate (effectiveness): How often are the right and wrong decisions made using the model?
- Buy-in (effectiveness): Do others agree with and support decisions made using the model?
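As a brief illustration of how such measures might be tallied, the following sketch computes them from a hypothetical log of decisions; the record fields and the five-point support scale are assumptions for illustration, not prescriptions from the article.

```python
# Hypothetical tally of the Table 2 measures from a decision log (illustrative fields).
decision_log = [
    # each record: was the model used, was the outcome judged correct, stakeholder support (1-5)
    {"used_model": True,  "outcome_correct": True,  "stakeholder_support": 4},
    {"used_model": True,  "outcome_correct": False, "stakeholder_support": 2},
    {"used_model": False, "outcome_correct": True,  "stakeholder_support": 5},
    {"used_model": True,  "outcome_correct": True,  "stakeholder_support": 5},
]

model_decisions = [d for d in decision_log if d["used_model"]]

frequency = len(model_decisions) / len(decision_log)                 # how often the model is used
success_rate = (sum(d["outcome_correct"] for d in model_decisions)
                / len(model_decisions))                              # right vs. wrong decisions
error_rate = 1 - success_rate
buy_in = (sum(d["stakeholder_support"] for d in model_decisions)
          / len(model_decisions))                                    # do others support the decisions?

print(f"frequency={frequency:.0%}  success={success_rate:.0%}  "
      f"error={error_rate:.0%}  buy-in={buy_in:.1f}/5")
```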
11. Implications for managers

In this article, I have discussed important aspects of intuitive and AI-based decision-making. Although both forms of decision-making have unique advantages, each method also has weaknesses that restrict the quality of decisions derived through one method alone. Therefore, instead of pitting intuition against AI and trying to identify which technique is better, the most prudent course of action is to identify the optimal way to combine these two methods to increase overall decision effectiveness. To this end, this article presents a model that specifies a decision process in which both intuition and AI can be used in combination. The model is based on the premise that combining intuition and AI will only be effective when two conditions are met; that is, when the task is ill-structured and when the decision maker is a domain expert. If either of these conditions is not met, it will be best to use AI.

The use of one of the two decision methods specified in this article (i.e., the confirmatory method or the exploratory method) will primarily depend on the number of decision alternatives available in the decision context. When the decision alternatives are limited, the confirmatory method enables the decision maker to make a quick decision based on their expert intuition, and then to use AI to evaluate the choice. In this method, AI functions as a check on expert intuition and reduces the effect of decision maker bias, which is a major flaw attributed to intuitive decision-making.

Conversely, when there are many decision alternatives, the exploratory method narrows the decision alternatives with the use of AI, and then the decision maker uses their expert intuition to select the best course of action. In this method, AI reduces human decision flaws that are attributed to information overload. In both methods, combining AI with human decision-making reduces the vulnerabilities of AI, such as interpersonal and ethical considerations, negative employee reactions to decisions made by algorithms rather than humans, and unique events for which there are insufficient past data to train algorithms to predict future events effectively. Overall, the two decision approaches presented in the article attempt to maximize decision quality while minimizing the inherent weaknesses of each technique alone.

In addition to improving decision quality, on a larger scale, combining AI and human decision-making is a step toward easing the growing societal concern that machines will take over human jobs. Finding ways for humans and AI to work together will make humans feel less threatened by AI and more willing to work with it. While there is little doubt that the future of human work is changing and will be significantly reshaped by technological advances such as AI, human input will still play a pivotal role in organizational decision-making. Therefore, the more collaboration that can be cultivated between human workers and AI, the more useful AI will be in advancing organizational processes.

At the same time, organizational decision makers should be cautioned regarding the downsides of overreliance on AI for decision-making. For example, consider the impact on medical practice, a profession that has seen a significant uptake of AI. Some of the concerns of AI integration in the field include decrease in physician-patient engagement, reduction of physician compensation caused by a decrease in relative value, and the risk of skill erosion in diagnostic expertise and clinical judgment (Miller & Brown, 2018). Similar concerns surrounding AI abound in other industries as well. Therefore, it is critical to strive to find the right balance between the use of AI and human skills in organizational processes.

AI can be effective in improving group decision-making as well. In the past, group decision-making was one strategy often prescribed to reduce individual biases in intuitive decision-making (Miles & Sadler-Smith, 2014). In some cases, group decisions have even been found to outperform AI-based decisions (Metcalf et al., 2019). But various factors, such as information exchange barriers, often limit the effectiveness of group
decisions (Dennis, 1996). "Artificial swarm intelligence" (Metcalf et al., 2019) is an AI-based technique that could potentially alleviate not only the human deficiencies in the group decision-making process but also some key limitations of AI, such as its lack of tacit knowledge and its reliance on historical data. This novel technique enables geographically dispersed human groups to work together in real time, using a dynamic platform, to navigate a decision space collectively and to converge on a decision (Rosenberg, 2016).

A limitation of the decision-making model introduced in this article is that it may not readily address a common flaw attributed to both human intuition and AI, which is the lack of explainability. The inability to explain the decision-making process adequately invites concerns over justice and fairness. For example, in HR, hiring and firing decisions usually have a significant impact on individuals and communities, which raises concerns of ethical practice, fairness, and procedural and distributive justice (Tambe et al., 2019). These concerns will intensify if the decision-making process cannot be clearly described. Also, the lack of explainability significantly hinders organizational learning and development. When the decision-making process cannot be explicitly stated, the process cannot be developed as a prototype or a decision aid for future decision-making. We can hope that developments in explainable artificial intelligence (Guidotti et al., 2018) will help to alleviate this issue, but organizational decision makers need to be aware of the consequences of this limitation.

This model may not be as useful in completely novel business situations. This is because both human intuition and AI draw on existing knowledge to form a judgment: intuition draws on experience and AI draws on data input. When no prior knowledge structure can be used to understand and form a judgment in a new situation, then both intuition and AI become less effective. For example, certain disruptive events, such as the COVID-19 pandemic, may completely shift the business landscape such that prior knowledge about a particular business environment may not be as useful in making future decisions. Decision makers must be cautious in using this model in such circumstances.

12. Future research considerations

Given the increasing integration of AI in organizational decision-making, it is important for both industry and academia to better understand not only the impact of AI on the decision-making process but also how best artificial and human intelligence can be combined to optimize organizational performance. As a step in this direction, the model presented in this article specifies two decision-making processes to combine AI and human intuition, depending on the properties of the decision environment. As a next step, this model should be empirically tested. There may be other organizational or individual factors that may affect AI-intuition collaboration. For example, will younger, more tech-savvy managers be more willing to use AI in combination with their intuition than older managers, who are more used to relying solely on their intuition? Will younger managers rely too much on AI and not enough on their judgment?

Another important area of research is to better understand the ethical implications of AI in organizational decision-making. How will AI respond to ethical dilemmas? Consider the ethical dilemma illustrated by the familiar runaway trolley experiment, a binary decision context in which one alternative leads to the death of many and the other choice leads to the death of one. Will AI sacrifice the many for the benefit of one? Or will AI save the many by sacrificing the one? Imagine an automated car with one passenger (the owner of the vehicle) driving on a cliffside road. Suppose the car has a malfunction, and the only options are either to drive over the cliff, which will most certainly kill the passenger, or to drive into a crowd of five, likely killing all of them but saving the passenger from certain death. How will the AI-based automated vehicle react? Will it be loyal to the owner and save the owner's life at any cost? Or will it aim to do the greater good (i.e., to minimize loss of life) by sacrificing the car owner? Many organizational decisions can affect the lives of stakeholders and even the larger community, so business managers need to recognize the impact of AI from an ethical standpoint.

Relatedly, the impact of AI on justice perceptions is another important area of research. As previously noted, people may be less trusting of decisions derived through algorithms than those made by humans, especially when those decisions have an adverse impact on them. Owing to the lack of explainability of AI-based decision-making, there may be concerns about procedural and distributive justice (Tambe et al., 2019). There is insufficient empirical research for anyone to truly understand the impact of AI-based decision-making on justice perceptions. Will combining AI with human intuition add to or alleviate these concerns? What steps can organizations take to reduce negative perceptions related to the use of AI?
Answers to such questions have important implications for the successful adoption of AI in organizational decision-making.

AI regulation is another critical area of research. As the role of AI in organizational decision-making continues to expand, how best can AI be regulated to minimize harm to humans and to hold organizations accountable for their actions? Haenlein and Kaplan (2019) argue that instead of regulating AI systems, a better way is to develop commonly accepted requirements related to the training and testing of AI algorithms. Given the rapid change in AI technology, the authors argue that this method would enable stable regulation even if the technology changes over time. By identifying effective ways to regulate AI, academic research can immensely contribute to the healthy integration of AI in organizational processes.

13. Conclusion

The decision-making model presented in this article is an integrative one that combines the strengths of expert intuition and AI to solve ill-structured organizational problems. In a fast-changing environment where AI has transformed organizational processes, the model identifies an effective way to combine complementary skills of the two modes while minimizing the vulnerabilities of each method. Throughout history, human intuition has had an extraordinary impact on innovation and advancement, including the creation and development of AI itself. Therefore, rather than attempting to replace this unique human skill with advanced technology, organizations will greatly benefit from the integration of intuition and AI to improve organizational decision-making.

References

Agor, W. H. (1986). The logic of intuition: How top executives make important decisions. Organizational Dynamics, 14(3), 5-18.

Agrawal, A., Gans, J. S., & Goldfarb, A. (2019). Exploring the impact of artificial intelligence: Prediction versus judgment. Information Economics and Policy, 47, 1-6.

Akinci, C., & Sadler-Smith, E. (2013). Assessing individual differences in experiential (intuitive) and rational (analytical) cognitive styles. International Journal of Selection and Assessment, 21(2), 211-221.

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica. Available at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Barnard, C. (1938). The functions of the executive. Cambridge, MA: Harvard University Press.

Burke, L. A., & Miller, M. K. (1999). Taking the mystery out of intuitive decision-making. The Academy of Management Executive, 13(4), 91-99.

Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4(1), 55-81.

Chen, Y., Tsai, W., & Hu, C. (2008). The influences of interviewer-related and situational factors on interviewer reactions to highly structured job interviews. International Journal of Human Resource Management, 19(6), 1056-1071.

Dane, E., & Pratt, M. G. (2007). Exploring intuition and its role in managerial decision-making. Academy of Management Review, 32(1), 33-54.

Dane, E., & Pratt, M. G. (2009). Conceptualizing and measuring intuition: A review of recent trends. International Review of Industrial and Organizational Psychology, 24, 1-40.

Dane, E., Rockmann, K. W., & Pratt, M. G. (2012). When should I trust my gut? Linking domain expertise to intuitive decision-making effectiveness. Organizational Behavior and Human Decision Processes, 119(2), 187-194.

Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Available at https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

Denhardt, R. B., & Dugan, H. S. (1978). Managerial intuition: Lessons from Barnard and Jung. Business and Society, 19(1), 26-30.

Dennis, A. R. (1996). Information exchange and use in small group decision-making. Small Group Research, 27(4), 532-550.

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3), 1155-1170.

Dijksterhuis, A. (2004). Think different: The merits of unconscious thought in preference development and decision-making. Journal of Personality and Social Psychology, 87(5), 586-598.

Dijkstra, K. A., Pligt, J., & Kleef, G. A. (2013). Deliberation versus intuition: Decomposing the role of expertise in judgment and decision-making. Journal of Behavioral Decision Making, 26(3), 285-294.

Frantz, R. (2003). Herbert Simon. Artificial intelligence as a framework for understanding intuition. Journal of Economic Psychology, 24(2), 265-277.

Friedman, L., Howell, W. C., & Jensen, C. R. (1985). Diagnostic judgment as a function of the preprocessing of evidence. Human Factors: The Journal of the Human Factors and Ergonomics Society, 27(6), 665-673.

Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), 1-42.

Haenlein, M., & Kaplan, A. (2019). A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. California Management Review, 61(4), 5-14.

Hammond, K. R., Hamm, R. M., Grassia, J., & Pearson, T. (1987). Direct comparison of the efficacy of intuitive and analytical cognition in expert judgment. IEEE Transactions on Systems, Man, and Cybernetics, 17(5), 753-770.

Hodgkinson, G. P., Sadler-Smith, E., Burke, L. A., Claxton, G., & Sparrow, P. R. (2009). Intuition in organizations: Implications for strategic management. Long Range Planning, 42(3), 277-297.

Hogarth, R. M. (2002). Deciding analytically or trusting your intuition? The advantages and disadvantages of analytic and intuitive thought (Working Paper No. 654). Barcelona, Spain: Universitat Pompeu Fabra.

Huang, M.-H., Rust, R., & Maksimovic, V. (2019). The feeling economy: Managing in the next generation of artificial intelligence (AI). California Management Review, 61(4), 43-65.

Ingold, D., & Soper, S. (2016, April 21). Amazon doesn't consider the race of its customers. Should it? Bloomberg. Available at https://www.bloomberg.com/graphics/2016-amazon-same-day/

Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision-making. Business Horizons, 61(4), 577-586.

Kahneman, D., & Klein, G. (2009). Conditions for intuitive expertise: A failure to disagree. American Psychologist, 64(6), 515-526.

Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80(4), 237-251.

Kahneman, D., & Tversky, A. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90(4), 293-315.

Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15-25.

Klein, G. A. (1993). A recognition-primed decision (RPD) model of rapid decision-making. In G. A. Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision-making in action: Models and methods (pp. 138-147). New York, NY: Ablex Publishing.

Klein, G. A., Calderwood, R., & Clinton-Cirocco, A. (1986). Rapid decision-making on the fire ground. In Proceedings of the Human Factors and Ergonomics Society annual meeting (Vol. 30, No. 6, pp. 576-580). Thousand Oaks, CA: Sage.

Klein, G. A., Calderwood, R., & Macgregor, D. (1989). Critical decision method for eliciting knowledge. IEEE Transactions on Systems, Man, and Cybernetics, 19(3), 462-472.

Klein, G. A., & Klinger, D. (1991). Naturalistic decision-making human systems. IAC Gateway, 11(3). Available at https://doi.org/10.1518/001872008X288385

Lipshitz, R., Klein, G., Orasanu, J., & Salas, E. (2001). Taking stock of naturalistic decision-making. Journal of Behavioral Decision Making, 14(5), 331-352.

Metcalf, L., Askay, D. A., & Rosenberg, L. B. (2019). Keeping humans in the loop: Pooling knowledge through artificial swarm intelligence to improve business decision-making. California Management Review, 61(4), 84-109.

Miles, A., & Sadler-Smith, E. (2014). "With recruitment I always feel I need to listen to my gut": The role of intuition in employee selection. Personnel Review, 43(4), 606-627.

Miller, D. D., & Brown, E. W. (2018). Artificial intelligence in medical practice: The question to the answer? The American Journal of Medicine, 131(2), 129-133.

Moravčík, M., Schmid, M., Burch, N., Lisý, V., Morrill, D., Bard, N., Davis, T., Waugh, K., Johanson, M., & Bowling, M. (2017). DeepStack: Expert-level artificial intelligence in heads-up no-limit poker. Science, 356(6337), 508-513.

Prabhakar, A. (2017, January 27). The merging of humans and machines is happening now. Wired. Available at https://www.wired.co.uk/article/darpa-arati-prabhakar-humans-machines

Pretz, J. E. (2008). Intuition versus analysis: Strategy and experience in complex everyday problem solving. Memory and Cognition, 36(3), 554-566.

Rosenberg, L. (2016). Artificial swarm intelligence vs human experts. Proceedings of the 2016 International Joint Conference on Neural Networks (pp. 2547-2551). New York, NY: IEEE.

Saenz, M. J., Revilla, E., & Simón, C. (2020, March 18). Designing AI systems with human-machine teams. MIT Sloan Management Review. Available at https://sloanreview.mit.edu/article/designing-ai-systems-with-human-machine-teams/

Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85-117.

Shapiro, S., & Spence, M. T. (1997). Managerial intuition: A conceptual and operational framework. Business Horizons, 40(1), 63-69.

Shrestha, Y. R., Ben-Menahem, S. M., & von Krogh, G. (2019). Organizational decision-making structures in the age of artificial intelligence. California Management Review, 61(4), 66-83.

Simon, H. (1957). Administrative behavior (2nd ed.). New York, NY: Macmillan.

Simon, H. A. (1987). Making management decisions: The role of intuition and emotion. Academy of Management Perspectives, 1(1), 57-64.

Simon, H. A. (1992). What is an "explanation" of behavior? Psychological Science, 3(3), 150-161.

Strickland, E. (2019). IBM Watson, heal thyself: How IBM overpromised and underdelivered on AI health care. IEEE Spectrum, 56(4), 24-31.

Tambe, P., Cappelli, P., & Yakubovich, V. (2019). Artificial intelligence in human resources management: Challenges and a path forward. California Management Review, 61(4), 15-42.

Wilson, H. J., & Daugherty, P. R. (2018). Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review, 96(4), 114-123.

Young, T., Hazarika, D., Poria, S., & Cambria, E. (2018). Recent trends in deep learning based natural language processing. IEEE Computational Intelligence Magazine, 13(3), 55-75.