The Emerging Risk of Virtual Societal Warfare
Social Manipulation in a Changing Information Environment
For more information on this publication, visit www.rand.org/t/RR2714
RAND’s publications do not necessarily reflect the opinions of its research clients and sponsors.
Contents

Preface
Table
Summary
Acknowledgments
Abbreviations

CHAPTER ONE
A New Form of Conflict
Definitions and Concepts
Approach and Methodology

CHAPTER TWO
The Evolving Infosphere
Broader Social Trends
Trends in the Infosphere: Knowledge and Awareness
Networked Dynamics and the Viral Spread of Information
Sensationalism
Fragmentation of Information Sources
Concentration of Information Platforms
The Role of Echo Chambers
The Role of Influencers
The Trolling Ethic
The Rule of Data
Summary: A Changing Information Environment for Advanced Democracies

CHAPTER THREE
Insights from Social Science
Attitudes and Attitude Change
When Attitudes Change: Criteria for Attitude Shift
Correcting Disinformation: Beyond the Backfire Effect
Factors Influencing Social Trust
The Nature of Attitudes and the Shifting Infosphere

CHAPTER FOUR
Emerging Technologies
Precision Targeting of Influence
Artificial Intelligence
Algorithmic Decisionmaking
Virtual and Augmented Reality
Internet of Things
Voice-Enabled Interfaces
Blockchain and Distributed Ledger Systems
Video and Audio Fakery
Surveillance Systems
Conclusion: A Rapidly Changing Infosphere

CHAPTER FIVE
Future 1: The Death of Reality (2025 Scenario)
A Note About Methodology
The General Future
A Future of Virtual and Invented Reality
The End of a Shared Picture of the World
Information Aggression and Manipulation in the Death of Reality

CHAPTER SIX
Future 2: Silos of Belief (2024 Scenario)
The General Future
A Fragmented Society
The Wars of the Silos
A Burgeoning Landscape of Cyberharassment
Information-Based Aggression and the Silo Future

CHAPTER SEVEN
Future 3: The Rise of the Algorithms (2026 Scenario)
The General Future
The Cloud of Knowing
Interactive Data
The Cloud Knows Where You Have Been—and What You Think
Ruled by Algorithm: Surrendering to the POPE
Vulnerabilities and Risks
Information Aggression in an Algorithmic Future

CHAPTER EIGHT
The Emerging Risk of Virtual Societal Warfare
A New Form of Conflict: Virtual Societal Warfare
Dealing with the Threat of Virtual Societal Aggression
Designing a Response: An Initial Agenda

References
Summary
at levels of social trust, creating the sense that the institutions and pro-
cesses of advanced societies cannot be trusted and generating a sense of
persistent insecurity and anxiety.
To shed light on how these techniques might evolve, RAND
researchers built on a first-phase analysis from this project that focused
on Russian and Chinese efforts at hostile social manipulation. This
project was not yet aimed at solutions, but rather understanding—i.e.,
comprehending the character of the emerging challenge. It was designed
to set the stage for more-detailed discussion of potential responses to
the threat. But one lesson of this phase of research is that many of these
trends, technologies, and capabilities remain poorly understood, and
some possible responses have potentially dramatic implications for the
operation of the information environment, the character of free speech,
and other issues. It would be dangerous to begin promulgating possible
solutions without rigorous analysis of their likely consequences. This
report is designed to set the stage for such work by first analyzing the
scope and nature of the problem.
To understand the risk of virtual societal warfare, we surveyed
evidence in a range of categories to sketch out some initial contours
of how these techniques might evolve in the future. We grounded the
assessment in (1) detailed research on trends in the changing char-
acter of the information environment in the United States and other
advanced democracies; (2) the insights of social science research on
attitudes and beliefs; and (3) developments in relevant emerging tech-
nologies that bear on the practices of hostile social manipulation and
its more elaborate and dangerous cousin, virtual societal warfare (terms
which are defined further in Chapter One). In all three cases, we gath-
ered data on established research findings and existing trends.
Chapter Two offers our analysis of the characteristics of the
infosphere—the context in which such hostile techniques will be
employed. Chapter Three derives insights from social science, survey-
ing what research into beliefs and attitudes suggests about the forms
of social manipulation likely to be most effective. Chapter Four exam-
ines current developments in several technologies, from artificial intel-
ligence to virtual reality and the IoT, that could play a role in future
manipulation campaigns.
the platforms can do over the next two to three years to make a
dent in the problem.
• Make investments designed to erect new, broadly trusted informa-
tional mediating institutions that can help Americans make sense of
events. Governments (as well as private foundations and activists)
can also prompt trial-and-error work among information compa-
nies, such as internet browsers (especially those willing to take the
lead in new approaches), to experiment with revised algorithms,
new browser extensions, and rating and ranking different sites
and sources to see what works. The goal would be to send sig-
nals that would contribute to the overall inoculation effect being
sought by government policy. A major source of the challenges
today is the decline of any respected, trusted intermediary sources
that the public can rely on to get a sense of whether what they are
seeing is accurate. Apart from basic fact-checking organizations,
experimenting with different varieties of such revised intermedi-
ary institutions could help mitigate the effect of virtual societal
aggression.
• Begin working toward international norms constraining the use of
virtual societal aggression. The biggest risk of virtual societal war-
fare may be that it represents an insidious, gradual degradation of
the territorial integrity norm that has largely prevailed since 1945
and helped to keep the peace among countries. To the extent that
nations begin attacking one another in virtual but highly damag-
ing ways, the prevailing consensus on territorial nonaggression
could collapse, leading eventually to large-scale armed adventur-
ism. As with other forms of aggression, deterrence can contribute
strongly to defense, but so can international norms that help tie
the status and prestige of countries to their respect for fundamen-
tal principles.
• Better understand the workings and vulnerabilities of emerging tech-
nologies, especially artificial intelligence–driven information chan-
nels, virtual and augmented reality, and algorithmic decisionmak-
ing. If the United States and other democracies are not careful,
advances in the private application of these technologies will race
ahead of policy and even understanding, creating intense vulner-
Acknowledgments
The authors would like to thank the Office of Net Assessment for its
support and guidance of the project. We extend our sincere thanks to
the management of the RAND International Security and Defense
Policy Center—Seth Jones (for the beginning of the project), Christine
Wormuth (for its latter phases), Mike McNerney, and Rich Girven—
for their help in managing the project and developing the reports.
We would like to thank formal peer reviewers Miriam Matthews of
RAND and Adam Segal of the Council on Foreign Relations for their
very helpful reviews. The authors also received informal comments on
earlier drafts from Tim Hwang and Philip Smith, and we are very
grateful for their assistance.
Abbreviations
AI artificial intelligence
AR augmented reality
CGI computer-generated imagery
EVM Eulerian Video Magnification
GAN generative adversarial network
GPS Global Positioning System
HD high definition
IIWAM information/influence warfare and manipulation
IoT Internet of Things
NATO North Atlantic Treaty Organization
PFR pervasive facial recognition
POPE principle of passive election
PTSD posttraumatic stress disorder
RFID radio-frequency identification
VR virtual reality
CHAPTER ONE
A New Form of Conflict
1 Nicholas Thompson and Fred Vogelstein, “Facebook’s Two Years of Hell,” Wired,
March 2018.
2 One of the classic accounts is Manuel Castells, The Rise of the Network Society, New York:
Wiley-Blackwell, 2009.
3 Bruce Schneier, Click Here to Kill Everybody: Security and Survival in a Hyper-Connected
World, New York: W. W. Norton, 2018, p. 78.
4 Jennifer Kavanagh and Michael D. Rich, Truth Decay: An Initial Exploration of the
Diminishing Role of Facts and Analysis in American Public Life, Santa Monica, Calif.: RAND
Corporation, RR-2314-RC, 2018, pp. x–xi.
ings will suggest, forced attitude change is among the most demand-
ing goals of any such strategy. They could seek to take existing atti-
tudes and push them toward more extreme ends of the spectrum, or
they could seek to normalize and support groups with extreme views.
They could try to catalyze existing impulses into action, as when social
media posts have managed to generate actual protests or rallies that
would not have occurred otherwise. They could seek to disrupt the
activities and effectiveness of an information-based economy, imposing
economic costs in the process.
To be sure, the causal link between social manipulation and
outcomes—beliefs or behavior—is not always straight or linear. A
society’s foundation of attitudes, beliefs, and behavior patterns is not
subject to easy, direct manipulation. Changing attitudes is hard, and
research suggests that the link between attitudes and behavior can be
weak.5 Our other research suggests that while manipulation campaigns
can sometimes produce significant measurable outputs, such as num-
bers of tweets or posts, the actual outcomes, such as changes in attitudes
or behaviors, are much tougher to find. There is no simple relation-
ship between social manipulation programs and results in the target
country.
In this study, we use the term infosphere to refer to the ongoing social
process of information production, dissemination, and perception in
a society.6 A society’s infosphere is, most simply, its information envi-
5 See, for example, Joshua J. Guyer and Leandre R. Fabrigar, “Attitudes and Behavior,” in
James Wright, ed., International Encyclopedia of the Social & Behavioral Sciences, Vol. II, 2nd
ed., New York: Elsevier, 2015.
6 For a similar definition, see John Arquilla and David Ronfeldt, The Emergence of Noo-
politik: Toward an American Information Strategy, Santa Monica, Calif.: RAND Corpora-
tion, MR-1033-OSD, 1999, pp. 11–12, 16–17. For a U.S. Department of Defense defini-
tion of the “information environment,” see U.S. Department of Defense, Joint Publication
1-02: Department of Defense Dictionary of Military and Associated Terms, Washington, D.C.,
November 2010.
7 This approach is very close in spirit to the concept of information/influence warfare and
manipulation (IIWAM) offered by Herbert Lin and Jackie Kerr. They define IIWAM as
“the deliberate use of information by one party on an adversary to confuse, mislead, and
ultimately to influence the choices and decisions that the adversary makes.” It is thus a
“hostile non-kinetic activity” whose targets are “the adversary’s perceptions.” Their concept
of IIWAM is therefore distinct from classic cyberaggression because attacks in the IIWAM
realm focus on “damaging knowledge, truth, and confidence, rather than physical or digital
artifacts. . . . IIWAM seeks to inject fear, anxiety, uncertainty, and doubt into the adversary’s
decision making processes.” Yet they still recognize that many IIWAM attacks will be made
possible by cyber intrusions of one sort or another, and so they also use the term “cyber-enabled IIWAM.” See Herbert Lin and Jackie Kerr, On Cyber-Enabled Information/Influence Warfare and Manipulation, working paper, Stanford, Calif.: Stanford Center for International Security and Cooperation, August 13, 2017, pp. 5–7.
Key Terms
• Infosphere. The ongoing process of producing, disseminating, and perceiving information in a society, including media, data-based algorithmic processes, and information exchange in networks.
• Hostile social manipulation. The purposeful, systematic generation and dissemination of information to produce harmful social, political, and economic outcomes in a target country by affecting beliefs, attitudes, and behavior. Tends to focus on manipulating beliefs, perceptions, and facts.
• Virtual societal warfare. The most elaborate or extreme form of hostile social manipulation, encompassing that concept but implying a more broad-based effort to disrupt and manipulate the information networks of a society. Can include mechanisms to degrade or manipulate outcomes from electronic networks, algorithmic decisionmaking, and virtual and augmented reality. The concept refers to a gradual, persistent approach to such goals.
• Cyber infrastructure attack. Efforts to use malware or other forms of cyberaggression to cause catastrophic damage to major economic or social infrastructure to create significant physical damage, harm to individuals, or social disruption and chaos.
8 Attacks on information security can be used for a wide range of purposes, but a major
national security focus has been on the use of cyber tools to infiltrate, disrupt, and poten-
tially cause severe damage to critical infrastructure in a society. For one treatment of these
risks, see Richard A. Clarke, Cyber War: The Next Threat to National Security and What to
Do About It, New York: Ecco, 2010.
9 Neal A. Pollard, Adam Segal, and Matthew G. Devost, “Trust War: Dangerous Trends
in Cyber Conflict,” War on the Rocks, January 16, 2018.
10 See, for example, Anthony Giddens, The Consequences of Modernity, Cambridge, United
Kingdom: Polity, 1990.
To shed light on how these techniques might evolve, this analysis took
two primary approaches. First, we grounded the assessment in detailed
research on three foundational issues: (1) trends in the changing char-
acter of the infosphere in the United States and other advanced democ-
racies; (2) the insights of social science research on attitudes and beliefs;
and (3) developments in relevant emerging technologies that bear on
the practice of social manipulation. In all three cases, we gathered data
on established research findings and existing trends. We also asked
how they were likely to play out over the next five to ten years.
Chapter Two offers our analysis of the characteristics of the
emerging infosphere, that is, the nature of the information environ-
ment in which social manipulation will play out, and its implications
for the future of such techniques. Chapter Three derives insights from
social science, surveying what research into beliefs and attitudes sug-
gests about the forms of social manipulation likely to be most effective.
Chapter Four examines current developments in several technologies,
from AI to VR and the IoT, that could play a role in future manipula-
tion campaigns.
The second primary approach taken in this analysis was to
employ a scenario planning methodology to describe the possible shape
of social manipulation futures. In Chapters Five through Seven, we
sketch out three scenarios for how social manipulation could affect
advanced societies over the next decade, based on the findings of the
research on trends and realities, the insights of the parallel study on
Russian and Chinese social manipulation strategies, and other research.
The three are not mutually exclusive; each one emphasizes a different
theme, but elements of all three are likely to combine to characterize
an actual future. In each case, we cite extensive research to support
different assumptions of the scenarios, and we describe ways in which
aggressive social manipulators could use the aspects of that scenario to
gain advantage.
Finally, Chapter Eight offers overall findings and lessons from the
analysis.
CHAPTER TWO
The Evolving Infosphere
1 This argument is made in Stephan Lewandowsky, Ullrich K. H. Ecker, and John Cook,
“Beyond Misinformation: Understanding and Coping with the ‘Post-Truth’ Era,” Journal of
Applied Research in Memory and Cognition, Vol. 6, No. 4, December 2017.
2 Kavanagh and Rich, 2018, pp. 41–78.
4 James W. Moore, “What Is the Sense of Agency and Why Does It Matter?” Frontiers in
Psychology, Vol. 7, Article 1272, August 2016.
5 Michael Specter, Denialism: How Irrational Thinking Hinders Scientific Progress, Harms
the Planet, and Threatens Our Lives, New York: Penguin Press, 2009, p. 33.
6 Jennifer Mitzen, “Ontological Security in World Politics: State Identity and the Security
Dilemma,” European Journal of International Relations, Vol. 12, No. 3, 2006.
7 Nathaniel Persily, “Can Democracy Survive the Internet?” Journal of Democracy, Vol. 28,
No. 2, April 2017, p. 64.
8 Art Swift, “Americans’ Trust in Mass Media Sinks to New Low,” Gallup, September 14,
2016. See Kavanagh and Rich, 2018, pp. 33–35, 108–109.
9 For one series of data, see Gallup, In Depth: Topics A to Z: Confidence in Institutions, 2018.
10 United States Code, Title 47, Section 230, Protection for Private Blocking and Screening
of Offensive Material, January 3, 2012.
11 Jonathan Vanian, “Facebook, Twitter Take New Steps to Combat Fake News and
Manipulation,” Fortune, January 20, 2018.
12 Cass R. Sunstein, #Republic: Divided Democracy in the Age of Social Media, Princeton,
N.J.: Princeton University Press, 2017, p. 20.
13 Lewandowsky, Ecker, and Cook, 2017, p. 357.
14 Robert Putnam, Bowling Alone: The Collapse and Revival of American Community, New
York: Simon and Schuster, 2001.
15 Amanda Taub, “The Real Story About Fake News Is Partisanship,” New York Times,
January 11, 2017.
16 Levi Boxell, Matthew Gentzkow, and Jesse M. Shapiro, “Greater Internet Use Is Not
Associated with Faster Growth in Political Polarization Among U.S. Demographic Groups,”
Proceedings of the National Academy of Sciences, Vol. 114, No. 40, October 2017a.
17 Gregory J. Martin and Ali Yurukoglu, “Bias in Cable News: Persuasion and Polarization,”
American Economic Review, Vol. 107, No. 9, 2017. See also Levi Boxell, Matthew Gentzkow,
and Jesse M. Shapiro, “Is Media Driving Americans Apart?” New York Times, December 6,
2017b.
18 The literature on the populist wave in the West is vast, including numerous books and
reports and hundreds of news accounts. For broad assessments, see John B. Judis, The Popu-
list Explosion, New York: Columbia Global Reports, 2016; and Jan-Werner Müller, What
Is Populism? Philadelphia, Pa.: University of Pennsylvania Press, 2016. For an examination
of the economic versus cultural arguments for the populist wave, see Ronald Inglehart and
Pippa Norris, Trump, Brexit, and the Rise of Populism: Economic Have-Nots and Cultural
Backlash, faculty research working paper, Cambridge, Mass.: Harvard Kennedy School,
RWP16-026, August 2016.
19 Rand Waltzman, “The Weaponization of Information: The Need for Cognitive Secu-
rity,” testimony presented before the Senate Armed Forces Committee, Subcommittee on
Cybersecurity, Santa Monica, Calif.: RAND Corporation, CT-473, April 27, 2017, p. 3.
20 On the general issue of ignorance, see Jason Brennan, “Trump Won Because Voters Are
Ignorant, Literally,” Foreign Policy, November 10, 2016; Jared Meyer, “The Ignorant Voter,”
Forbes, June 27, 2016; and Andrew Romano, “How Ignorant Are Americans?” Newsweek,
March 20, 2011.
21 D. J. Flynn, Brendan Nyhan, and Jason Reifler, “The Nature and Origins of Mispercep-
tions: Understanding False and Unsupported Beliefs About Politics,” Political Psychology,
Vol. 38, Supp. 1, 2017, p. 129. They also cite other literature on this point.
22 Tom Nichols, “Our Graduates Are Rubes,” Chronicle of Higher Education, January 15,
2017a.
23 Kavanagh and Rich, 2018, pp. 106–107.
24 Pew Research Center, Public Knowledge of Current Affairs Little Changed by News and
Information Revolutions, Washington, D.C., April 15, 2007, p. 1.
25 Bruce Wharton, “Remarks on ‘Public Diplomacy in a Post-Truth Society,’” in Shawn
Powers and Markos Kounalakis, eds., Can Public Diplomacy Survive the Internet? Washing-
ton, D.C.: U.S. Advisory Commission on Public Diplomacy, 2017, p. 8.
26 A classic treatment is Ziva Kunda, “The Case for Motivated Reasoning,” Psychological
Bulletin, Vol. 108, No. 3, November 1990. See also Nicholas Epley and Thomas Gilovich,
“The Mechanics of Motivated Reasoning,” Journal of Economic Perspectives, Vol. 30, No. 3,
2016; and David P. Redlawsk, “Hot Cognition or Cool Consideration? Testing the Effects
of Motivated Reasoning on Political Decision Making,” Journal of Politics, Vol. 64, No. 4,
2002.
27 This is the central thesis of Albert-László Barabási, Linked: How Everything Is Connected
to Everything Else and What It Means for Business, Science, and Everyday Life, New York: Basic
Books, 2014, p. 5.
28 Sunstein, 2017, pp. 102–103, cites cases to suggest that the reasons for informational
cascades remain unclear, and the actual shared information (such as songs) that goes viral is
fairly random.
29 See, for example, Justin Cheng, Lada Adamic, P. Alex Dow, Jon Michael Kleinberg,
and Jure Leskovec, “Can Cascades Be Predicted?” Twenty-Third International World Wide
Web Conference, Conference Proceedings, Seoul: Association for Computing Machinery, 2014.
For a broader treatment, see Lee Daniel Kravetz, Strange Contagion, New York: Harper-
Collins, 2017, who emphasizes the role of social connections in sparking viral spread. The
marketing professor Jonah Berger has studied the issue and proposed six essential character-
istics of virality: social currency (facts that people want to share because they look cool and
in-the-know); triggering stimuli; the use of emotional appeals; popular attention; facts that
have practical value to people; and facts that are embedded in narratives or stories; Jonah
Berger, Contagious: Why Things Catch On, New York: Simon and Schuster, 2013.
30 One massive study of viral patterns on Twitter, for example, found that although more-
influential users were more associated with the viral spread of stories or hashtags, in fact
virality remained a relatively unpredictable phenomenon, meaning that the only effective
strategy to spread information was simply to target as many influencers as possible and hope
for the best. See Eytan Bakshy, Jake M. Hofman, Winter A. Mason, and Duncan J. Watts,
“Everyone’s an Influencer: Quantifying Influence on Twitter,” Fourth ACM International
Conference on Web Search and Data Mining, Conference Proceedings, Hong Kong: Association
for Computing Machinery, 2011.
31 Soroush Vosoughi, Deb Roy, and Sinan Aral, “The Spread of True and False News
Online,” Science, Vol. 359, No. 6380, 2018, p. 1146.
social influence. People are following what other people are saying,
doing, and “liking.” The specific reasons why one story, meme, gif, or
song goes viral as opposed to another may be mysterious, but the basic
mechanism is not: It is a form of social proof effect. One implication
is that such cascades can be triggered by early and significant endorse-
ments, especially from socially influential actors. Either way, cascades
can either start or run aground very quickly, pointing again to a critical
first-mover advantage in the social manipulation space.32
Information is not the only thing that can spread through digi-
tized networks. A 2012 Facebook study of almost 700,000 users found
that “moods are contagious.” By changing the balance of positive and
negative stories going into people’s news feeds, Facebook was able to
shift their moods in measurable ways, and produce resulting behaviors,
such as higher numbers of negative posts, from those people. “Emo-
tional states,” the study concludes, “can be transferred to others.”33
Sensationalism
A major trend in the news media and the infosphere more broadly in
the last three decades has been a measurable increase in the degree of
sensationalism in the content and style of reporting. The general prob-
lem of sensationalism is not new—today’s trend mirrors earlier periods,
such as the infamous age of yellow journalism at the end of the 19th
century.34 Moreover, while there is a strong anecdotal sense of the rise
of sensationalism, few studies have tried to measure its growth over
the last decade. Nonetheless, evidence from a range of sources suggests
that mainstream and niche information sources are increasingly rely-
ing on sensational accounts to attract attention.
Part of the engine of growing sensationalism is that many web-
sites are purpose-built to generate sensationalistic content in order to
35 Eli Pariser, The Filter Bubble: What the Internet Is Hiding from You, New York: Penguin
Press, 2011, p. 69.
36 See, for example, P. H. Vettehen and M. Kleemans, “Proving the Obvious? What Sensa-
tionalism Contributes to the Time Spent on News Video,” Electronic News, Vol. 12, No. 2,
2018; and D. K. Thussu, News as Entertainment: The Rise of Global Infotainment, London:
SAGE Publications, 2007.
37 Jack Nicas, “How YouTube Drives People to the Internet’s Darkest Corners,” Wall Street
Journal, February 7, 2018.
38 Bharat N. Anand, “The U.S. Media’s Problems Are Much Bigger than Fake News and
Filter Bubbles,” Harvard Business Review, January 5, 2017.
39 Anand, 2017.
40 Robert Kozinets, “How Social Media Fires People’s Passions—and Builds Extremist
Divisions,” The Conversation, November 13, 2017.
41 Michael J. Mazarr, “The Pessimism Syndrome,” Washington Quarterly, Vol. 21, No. 3,
1998.
42 See Tom Stafford, “Psychology: Why Bad News Dominates the Headlines,” BBC,
July 29, 2014.
43 Tim Wu, The Attention Merchants: The Epic Scramble to Get Inside Our Heads, New York:
Vintage Books, 2016.
44 Giddens, 1990, p. 2.
45 Elisa Shearer and Jeffrey Gottfried, “News Use Across Social Media Platforms 2017,” Pew
Research Center, September 7, 2017.
46 Kristen Bialik and Katerina Eva Matsa, “Key Trends in Social and Digital News Media,”
Pew Research Center, October 4, 2017.
47 Katarina Eva Matsa and Elisa Shearer, “News Use Across Social Media Platforms 2018,”
Pew Research Center, September 10, 2018.
48 Research has found that while “single-source” or single-issue information or massively
tilted information environments can have significant effects on attitudes and behavior, these
effects drop off quickly once the ground is meaningfully contested or blurred among many
different issues; Anthony R. Pratkanis and Elliot Aronson, Age of Propaganda: The Everyday
Use and Abuse of Persuasion, New York: Henry Holt and Company, 2001, pp. 29–30, 83–84.
49 Bialik and Matsa, 2017.
50 Conor Sen, “The ‘Big Five’ Could Destroy the Tech Ecosystem,” Bloomberg, Novem-
ber 15, 2017.
51 Farhad Manjoo, “Tech’s ‘Frightful 5’ Will Dominate Digital Life for Foreseeable Future,”
New York Times, January 20, 2016.
52 National Public Radio, “How 5 Tech Giants Have Become More Like Governments
Than Companies,” Fresh Air, October 26, 2017.
“if you liked that, you are likely to enjoy this” rule of providing content
that fits established patterns. The result would be to reinforce those
beliefs and perhaps spur them to greater degrees of extremism.
Research on this trend is mixed, suggesting that, at least so far,
the measurable effect of echo chambers may not be as great as some
reports have suggested. Nonetheless, there is sufficient evidence that
the trend has the potential to constrain people’s information searches
and openness to contrary views that it should be taken seriously.
Several recent studies have found evidence that exposure to the
chaotic, often ideologically segmented menu of options online increases
affinity for existing ideological views and in effect allows people to
wall themselves off from contrary views.53 Other work shows, not sur-
prisingly, that political bloggers tend to link to ideologically similar
blogs.54 Sunstein has marshaled extensive research to claim that echo
chambers are an urgent threat to democracy.55 Another study points to
the risk of “stratamentation,” a situation in which the politically active
few are locked in silos resistant to contrary information, whereas the
majority of the population consume very little political news.56 One
study examined efforts to create specially skewed search engines that
favored one political party in their search results, much as an echo
chamber would; they found voting preferences affected by as much as
20 percent.57
Pariser has written of the ultimate form of an echo chamber: An
entirely personalized information bubble for each person online, built
from the immense data about their preferences that are now available.
53 One study that demonstrated ideological convergence online is Diana C. Mutz and Paul
S. Martin, “Facilitating Communication Across Lines of Political Difference: The Role of
Mass Media,” American Political Science Review, Vol. 95, No. 1, 2001.
54 Lada Adamic and Natalie Glance, “The Political Blogosphere and the 2004 U.S. Elec-
tion: Divided They Blog,” paper presented at the Second Annual Workshop on the Weblog-
ging Ecosystem, Chiba, Japan: Association for Computing Machinery, 2005.
55 Sunstein, 2017.
56 W. Lance Bennett and Shanto Iyengar, “A New Era of Minimal Effects? The Changing
Foundations of Political Communication,” Journal of Communication, Vol. 58, No. 4, 2008.
57 O’Neil, 2016, p. 184.
logues.61 This finding is true even of groups that are not ideologically
consistent, but echo chambers of such people could be expected to have
an even greater effect in this regard. Indeed, Sunstein quotes research
indicating that group polarization effects are materializing online.
One important study confirms these effects. Researchers
attempted to examine how the existence of a fragmented informa-
tion context offering the possibility of extreme and polarized views
would affect information search and resulting “affective polarization”:
the degree to which members of opposing political groups are viewed
in a harshly negative light. They found that, indeed, “discretionary
information search”—the ability to seek out polarized, confirming evi-
dence in a highly fragmented information environment—has the effect
of increasing such affective polarization by as much as 15 percentage
points.62 A related study found that access to broadband internet, and
thus a wider array of potential information sources (including highly
polarized ones), increases affective polarization.63 Another study found
that Facebook friends tend to highlight news stories and other infor-
mation that reinforce existing beliefs of self-selected groups.64
Yet even with such evidence, the findings of research on echo
chambers remain highly conflicted. Much research, as Sunstein indi-
cates, suggests that many people “dislike echo chambers. . . . [M]any
members of the public are keenly interested in seeing perspectives that
diverge from their own.”65 One extensive study, by economists Gentz-
61 Sunstein, 2017, pp. 68–69, 77–78. For one review of the literature, see Daniel J. Isenberg,
“Group Polarization: A Critical Review and Meta-Analysis,” Journal of Personality and Social
Psychology, Vol. 50, No. 6, 1986.
62 Richard R. Lau, David J. Andersen, Tessa M. Ditonto, Mona S. Kleinberg, and David
P. Redlawsk, “Effect of Media Environment Diversity and Advertising Tone on Information
Search, Selective Exposure, and Affective Polarization,” Political Behavior, Vol. 39, No. 1,
2017.
63 Yphtach Lelkes, Guarav Sood, and Shanto Iyengar, “The Hostile Audience: The Effect
of Access to Broadband Internet on Partisan Affect,” American Journal of Political Science,
Vol. 61, No. 1, 2017.
64 Eytan Bakshy, Solomon Messing, and Lada Adamic, “Exposure to Ideologically Diverse
News and Opinion on Facebook,” Science, Vol. 348, No. 6239, 2015.
65 Sunstein, 2017, p. 5.
kow and Shapiro, aimed to measure the degree to which people only
visit websites they agree with, or their “isolation index.” (Conservatives
who spent all their time at Fox News would have an isolation index
of 100.) Their surveys found that self-identified conservatives have an
average isolation index online of 60.6 percent and self-identified liber-
als of 53.1 percent.66 Another study found people’s online news con-
sumption displaying “a remarkable degree of balance.”67 A major study
of the browsing habits of 50,000 individuals found that patterns did
reflect ideological gaps, but that these gaps were mostly a product of
habitual visits to established mainstream media sites. In fact, general
browsing brought them into contact with more rather than fewer dif-
fering views than offline news consumption.68
Recent polling by the Pew Research Center offers several perspec-
tives on the issue of echo chambers. One survey indicates that people
use multiple social media sites for their news, suggesting that they may
not be trapped in silos.69 Another poll found that only 9 percent of
social media users said they “often discuss, comment or post about pol-
itics or government,” indicating that online environments are mostly
nonpolitical in nature and thus might have mild effects on political
views. The same survey found that most people report their social
media friend groups contain people of differing viewpoints.70
An important 2013 study found little evidence of “defensive
avoidance” online—that is, the pattern of avoiding sites with content
that disagrees with the person’s political beliefs. People who visited
ideologically identified sites were more likely to visit mainstream sites
66 Matthew Gentzkow and Jesse M. Shapiro, “Ideological Segregation Online and Offline,”
Quarterly Journal of Economics, Vol. 126, No. 4, 2011.
67 Andrew M. Guess, Media Choice and Moderation: Evidence from Online Tracking Data,
job market paper, New York: New York University, September 6, 2016.
68 Seth Flaxman, Sharad Goel, and Justin M. Rao, “Filter Bubbles, Echo Chambers, and Online News Consumption,” Public Opinion Quarterly, Vol. 80, No. S1, 2016.
69 Elizabeth Grieco, “More Americans Are Turning to Multiple Social Media Sites for
News,” Pew Research Center, November 2, 2017.
70 Maeve Duggan and Aaron Smith, “The Political Environment on Social Media,” Pew
Research Center, October 25, 2016, p. 7.
71 R. Kelly Garrett, Dustin Carnahan, and Emily K. Lynch, “A Turn Toward Avoidance?
Selective Exposure to Online Political Information, 2004–2008,” Political Behavior, Vol. 35,
No. 1, 2013, pp. 113–134.
72 R. Kelly Garrett, “Echo Chambers Online?: Politically Motivated Selective Exposure
Among Internet News Users,” Journal of Computer-Mediated Communication, Vol. 14, No. 2,
2009a. See also R. Kelly Garrett, “Politically Motivated Reinforcement Seeking: Reframing
the Selective Exposure Debate,” Journal of Communication, Vol. 59, No. 4, 2009b.
73 Norman H. Nie, Darwin W. Miller III, Saar Golde, Daniel M. Butler, and Kenneth
Winneg, “The World Wide Web and the U.S. Political News Market,” American Journal of
Political Science, Vol. 54, No. 2, 2010.
74 R. Kelly Garrett and Paul Resnick, “Resisting Political Fragmentation on the Internet,”
Daedalus, Vol. 140, No. 4, Fall 2011.
number of hubs, such as Amazon and Yahoo, play the same function.
Pages linked to only a few other places for all purposes “do not exist”;
only through massive sharing over hubs does news spread virally.76
One possible result of a number of these intersecting trends is
that social media, where information flows are more dependent on self-
selected echo chambers and shaped by influencers, reflects a far higher
dosage of disinformation than general news consumption. One 2018
study of European news consumption, for example, found that news
sites built around fabricated or exaggerated claims garnered only a tiny
proportion of the attention of “real” news sites. No false news site had
a monthly reach of over 3.5 percent of the population, and most were
far lower, compared with between 22 percent and 51 percent reach
for major newspaper sites. Yet on Facebook, the total interaction mea-
sures “generated by a small number of false news outlets matched or
exceeded that produced by the most popular news brands.”77
76 Barabási, 2014, pp. 56–58; the reference to doctor hubs of influence is on p. 129.
77 Richard Fletcher, Alessio Cornia, Lucas Graves, and Rasmus Kleis Nielsen, Measuring
the Reach of “Fake News” and Online Disinformation in Europe, Oxford, United Kingdom:
Reuters Institute for the Study of Journalism, University of Oxford, February 2018.
echo chambers. They are, as the scholar Whitney Phillips has put it,
“the grimacing poster children for the socially networked world.”78
The defining web collective for the trolling ethic is the site 4chan,
and in particular its infamous “/b/” board, ground zero for a roiling
series of ironic, hostile, and parodic commentary on issues, websites,
and individuals. Between 2008 and 2010, 4chan became a massive
web presence, receiving over 8 million daily page views, 200 million
visitors a month, and 400,000 daily posts.79
Trolls have now engaged in a series of famous campaigns and
attacks. In more purely ironic and humorous ways, they have staged
ongoing campaigns of harassment against what they perceive to be silly
and pointless web humor sites. The hacker collective “Anonymous” has
roots in the troll movement, in particular in the 4chan community,
but eventually split off to become a more active presence in the “real
world,” organizing formal protests as well as continuing a series of dis-
ruptive hacks and trolling campaigns. They have attacked such targets
as random online forums and corporations.
In more sinister terms, trolls have banded together to launch the
most heinous forms of cyberbullying against individuals they perceive
“not to get it” and to oppose trolling ethics. In one of the more sin-
ister examples of the process, internet trolls began a practice of “RIP
trolling” in which they posted brutal and often obscene comments to
memorial sites on Facebook or other locations, even attacking the fam-
ilies of children who had died in tragic ways.
Such attacks would seem incomprehensible, but as Whitney Phil-
lips has pointed out, one of the most important aspects of the trolling
movement is its “emotional dissociation.” Many trolls undertake hostile
attacks online precisely because they make a sharp distinction between
virtual and real selves. Many, Phillips reports from her research, are in
real life quiet, thoughtful, and respectful individuals who insist they
would never engage in face-to-face attacks in the way they do online.
78 Whitney Phillips, This Is Why We Can’t Have Nice Things: Mapping the Relationship
Between Online Trolling and Mainstream Culture, Cambridge, Mass.: MIT Press, 2015, p. 8;
cf. also pp. 5–6.
79 Phillips, 2015, pp. 55–56.
They “insist that their troll selves and their offline (‘real’) selves are
subject to totally different sets of rules.”80
The distinction may be morally objectionable, but it allows trolls
to justify dramatically different behavior online. And such practices
are in fact closely related to a wider online practice of dissociation and
playing with sensational content to garner attention. The idea of “click-
bait” is just a step away from “trollbait”; the idea in both cases is to gen-
erate content that attracts attention rather than offering any meaning-
ful substance. In both cases, the pattern is in part alienated individuals
or groups attacking what they perceive to be the “conventional” ethics
and institutions of the system.
Such trolling and harassment have gone global and have been
used by many international actors, both state and nonstate, for coercive
purposes. Haroon Ullah catalogs ways in which extremists have cre-
ated fake videos to undermine the credibility of political leaders, as well
as promotional videos for the radical lifestyle. He depicts the modern
battle with extremism as dominated by information channels, many of
which employ trolling tactics to harass and discredit those opposed to
the extremist message.81
what you watch, read, listen to, ask for, and eat.” Based on data col-
lected from personal phones, advertisers know “what times of day you
usually browse, watch videos, answer e-mail, travel to the office—and
what travel routes you take.” They know what sorts of hotels people
stay in on travel and what parts of the world they’re interested in visit-
ing. Such data can fuel AI-driven marketing: Armed with such precise
preference information, “[m]achines will craft ads, just as machines
will drive cars.”82
“With little notice or fanfare,” Pariser has argued, “the digital
world is fundamentally changing. What was once an anonymous
medium where anyone could be anyone . . . is now a tool for solicit-
ing and analyzing our personal data.”83 Every time someone makes a
travel reservation, looks up a word in an online dictionary, searches for
anything, makes a social media post, buys something online, or con-
ducts any activity whatsoever, companies are vacuuming up thousands
of pieces of information and assembling portraits of individuals. This
information gathering is the defining business model of most leading
internet companies, and specialist firms such as Acxiom and Cam-
bridge Analytica are compiling thousands of discrete pieces of infor-
mation about every American. This information can include names
of family and friends, loan history and purchase history, credit card
balances, pet ownership, and just about every other measurable charac-
teristic of individuals. Using such databases, private companies will be
able to monitor and predict life patterns, from our daily routine to our
purchasing preferences.84
Stolen data can become an important element of the toolkit of
social manipulators. A leading example is the theft of information from
the Office of Personnel Management, which could be used to black-
mail Americans. The alleged Chinese intrusion netted personnel files,
digital fingerprints, Social Security numbers, and much else. In a tradi-
tional cyberattack, an aggressor could use the information to “lock out
82 Ken Auletta, “How the Math Men Overthrew the Mad Men,” New Yorker, May 21, 2018.
83 Pariser, 2011, p. 6.
84 Stacey Higginbotham, “IBM Is Bringing in Watson to Conquer the Internet of Things,”
Fortune, December 15, 2015.
85 Ian Brown, “Imagining a Cyber Surprise: How Might China Use Stolen OPM Records
to Target Trust?” War on the Rocks, May 22, 2018.
CHAPTER THREE
Insights from Social Science
1 Christopher Paul and Miriam Matthews, “The Russian ‘Firehose of Falsehood’ Pro-
paganda Model: Why It Might Work and Options to Counter It,” Santa Monica, Calif.:
RAND Corporation, PE-198-OSD, 2016.
far more information than they can process. They are constantly in the
market for shortcuts: ways to make sense of the incoming flood, espe-
cially by judging the validity of specific facts. People use various such
techniques, including fitting incoming information into established
worldviews, accepting information from sources they believe to be reli-
able, and “going with the crowd.” These shortcuts limit the potential
effectiveness of social manipulation campaigns: Efforts to break people
away from preestablished beliefs they are seeking to bolster can be very
difficult.
Indeed, motivated reasoning—using information to support con-
clusions we have already reached rather than objectively evaluating it—
is fundamental to human ways of thinking.2 Research suggests that
there is an accuracy motivation—people want to be in possession of
“true” information—but that motivation is constantly at war with a
countervailing impulse to sustain and reinforce existing viewpoints,
which is in many circumstances the stronger motivation.3 The result is
an ongoing habit of “bounded rationality” and misperception that can
sometimes be intentionally manipulated.
This foundational nature of human cognition produces argu-
ably the most important entry point for social manipulation: Belief
and expectation create the lens through which people perceive events
and facts. Belief is, in many ways and on most occasions, more power-
ful than facts.4 This provides a potential social manipulator with raw
material, in the form of the belief systems of subgroups within a pop-
ulation, to shape with all the tools and techniques described above.
There is, as we will see, a certain degree of accuracy motivation on the
part of many people much of the time. But it is highly contingent, and
in some groups at some times it can entirely give way to preconceptions
and biases, cognitive faults that skilled social manipulators can employ
to achieve their goals.
5 Gregory R. Maio and Geoffrey Haddock, The Psychology of Attitudes and Attitude Change,
Thousand Oaks, Calif.: SAGE Publications, 2009, p. 4. For more on the “attitude” concept
in social psychology, see Richard E. Petty, Russell H. Fazio, and Pablo Briñol, eds., Attitudes:
Insights from the New Implicit Measures, New York: Psychology Press, 2008.
6 Much of social psychology of the early to mid–20th century was focused on the study of
attitudes and, more specifically, defining and measuring them; Herbert C. Kelman, “Atti-
tudes Are Alive and Well and Gainfully Employed in the Sphere of Action,” American Psy-
chologist, Vol. 29, No. 5, 1974. One of the first studies of attitudes was Gordon Allport’s
1935 work in which he defined attitude as “a mental and neural state of readiness organized
through experience, and exerting a directive influence upon the individual’s response to all
objects and situations with which it is related”; Gordon Allport, “Attitudes,” in Carl Murchison,
ed., A Handbook of Social Psychology, Worcester, Mass.: Clark University Press, 1935, p. 798;
referenced in Garth S. Jowett and Victoria O’Donnell, Propaganda and Persuasion, 3rd ed.,
Thousand Oaks, Calif.: SAGE Publications, 1999, pp. 166–167.
7 This consensus emerged after a series of studies in the 1970s demonstrated the resil-
iency of fixed attitudes to change even after subjects were presented with new informa-
tion. Researchers concluded that “once formed, impressions are remarkably resilient”; Lee
D. Ross, Mark R. Lepper, Fritz Strack, and Julia Steinmetz, “Social Explanation and Social
Expectation: Effects of Real and Hypothetical Explanations on Subjective Likelihood,” Jour-
nal of Personality and Social Psychology, Vol. 35, No. 11, 1977; Charles G. Lord, Lee D. Ross,
and Mark R. Lepper, “Biased Assimilation and Attitude Polarization: The Effects of Prior
Theories on Subsequently Considered Evidence,” Journal of Personality and Social Psychology,
Vol. 37, No. 11, 1979. See also Pratkanis and Aronson, 2001, pp. 40–47.
8 Brendan Nyhan and Jason Reifler, Misinformation and Fact-Checking: Research Findings
from Social Science, Washington, D.C.: New America, February 2012, p. 3; Rohini Ahluwa-
lia, “Examination of Psychological Processes Underlying Resistance to Persuasion,” Journal
of Consumer Research, Vol. 27, No. 2, 2000.
9 See Jonathan Haidt, “Moral Psychology for the Twenty-First Century,” Journal of Moral
Education, Vol. 42, No. 3, 2013; Hugo Mercier and Dan Sperber, The Enigma of Reason,
Cambridge, Mass.: Harvard University Press, 2017; and Steven Sloman and Philip Fernbach,
The Knowledge Illusion: Why We Never Think Alone, New York: Riverhead Books, 2017.
10 Nyhan and Reifler (2012, pp. 8–10) provide an overview of several studies from the
1990s to 2012 that measure whether the introduction of new information can change an
individual’s opinion on a certain policy. Important variables include the importance of the
policy position to a person’s worldview, how strongly they believe in that position or policy,
and the source providing the new information. This will be further discussed in the next
section.
14 Nyhan and Reifler, 2012, p. 16. Ian Skurnik, Carolyn Yoon, Denise C. Park, and Nor-
bert Schwarz, “How Warnings About False Claims Become Recommendations,” Journal of
Consumer Research, Vol. 31, No. 4, 2005. The impact of the illusory truth effect has been
documented in both political and consumer behavior. While it was initially believed that the
effect only occurred when individuals were highly uncertain about a subject or statement,
the same study demonstrated that repetition of a statement makes it easier to cognitively
process, which makes it seem more plausible, even when people are knowledgeable about a
subject; Lisa K. Fazio, Nadia M. Brashier, B. Keith Payne, and Elizabeth J. Marsh, “Knowl-
edge Does Not Protect Against Illusory Truth,” Journal of Experimental Psychology, Vol. 144,
No. 5, 2015.
15 Matt Chessen, “Understanding the Psychology Behind Computational Propaganda,” in
Shawn Powers and Markos Kounalakis, eds., Can Public Diplomacy Survive the Internet?
Washington, D.C.: U.S. Advisory Commission on Public Diplomacy, 2017a, p. 21.
16 Kimberlee Weaver, Stephen M. Garcia, Norbert Schwarz, and Dale T. Miller, “Inferring
the Popularity of an Opinion from Its Familiarity: A Repetitive Voice Can Sound Like a
Chorus,” Journal of Personality and Social Psychology, Vol. 92, No. 5, 2007.
17 Paul and Matthews, 2016, p. 3.
18 Social group can include one’s virtual network and physical social group; social group can
also be expanded to mean one’s political party. On this point of the importance of “social
proof,” see, for example, Flynn, Nyhan, and Reifler, 2017, p. 136.
19 Robert Cialdini, Pre-Suasion: A Revolutionary Way to Influence and Persuade, New York:
Simon and Schuster, 2016, pp. 192–208.
20 Robert M. Bond, Christopher J. Fariss, Jason J. Jones, Adam D. I. Kramer, Cameron
Marlow, Jaime E. Settle, and James H. Fowler, “A 61-Million-Person Experiment in Social
Influence and Political Mobilization,” Nature, Vol. 489, 2012.
21 Joseph M. Stubbersfield, Jamshid J. Tehrani, and Emma G. Flynn, “Serial Killers, Spi-
ders and Cybersex: Social and Survival Information Bias in the Transmission of Urban Leg-
ends,” British Journal of Psychology, Vol. 106, No. 2, 2015.
rose significantly when paired with images of friends who had already
voted.22
22 These dynamics are described in Alex Pentland, Social Physics: How Good Ideas Spread—
The Lessons from a New Science, New York: Penguin Press, 2014, pp. 50–61, 64–65.
23 Lord, Ross, and Lepper, 1979; Kari Edwards and Edward E. Smith, “A Disconfirmation
Bias in the Evaluation of Arguments,” Journal of Personality and Social Psychology, Vol. 71,
No. 1, 1996; Charles S. Taber and Milton Lodge, “Motivated Skepticism in the Evaluation
of Political Beliefs,” American Journal of Political Science, Vol. 50, No. 3, 2006; Stephan
Lewandowsky, Ullrich K. H. Ecker, Colleen M. Seifert, Norbert Schwarz, and John Cook,
“Misinformation and Its Correction: Continued Influence and Successful Debiasing,” Psy-
chological Science in the Public Interest, Vol. 13, No. 3, 2012, p. 112; Brendan Nyhan and
Jason Reifler, “The Roles of Information Deficits and Identity Threat in the Prevalence of
Misperceptions,” Journal of Elections, Public Opinion and Parties, 2018. A series of studies
in 2005 and 2006 found that when misinformed people were presented with corrections
in news stories that contradicted their political predispositions, they rarely changed their
minds. In fact, they often became even more strongly set in their beliefs (Nyhan and Reifler,
2012, p. 10). A similar 2007 study examined whether providing misled people with correct
information about the proportion of immigrants in the U.S. population would affect their
views on immigration. It did not. See John Sides and Jack Citrin, How Large the Huddled
Masses? The Causes and Consequences of Public Misperceptions About Immigrant Populations,
paper presented at the 65th Annual National Conference of the Midwest Political Science
Association, Chicago, 2007.
24 Nyhan and Reifler, 2012.
25 Geoffrey D. Munro and Peter H. Ditto, “Biased Assimilation, Attitude Polarization, and
Affect in Reactions to Stereotype-Relevant Scientific Information,” Personality and Social
Psychology Bulletin, Vol. 23, No. 6, 1997. As Paul Lazarsfeld and his colleagues found in
their assessment of U.S. voters, people often seek out opinion leaders who reinforce their
preexisting ideas when forming opinions. In the authors’ words, “exposure is always selective;
. . . a positive relationship exists between the people’s opinions and what they chose to listen
to or read;” Paul Lazarsfeld, Bernard Berelson, and Hazel Gaudet, The People’s Choice: How
the Voter Makes Up His Mind in a Presidential Campaign, New York: Duell, Sloan, and
Pearce, 1948; referenced in Jowett and O’Donnell, 1999, p. 172. Frank Biocca, “Viewers’
Mental Models of Political Messages: Toward a Theory of the Semantic Processing of Televi-
sion,” in Frank Biocca, ed., Television and Political Advertising, Vol. I: Psychological Processes,
Hillsdale, N.J.: Lawrence Erlbaum Associates, 1991, p. 29; referenced in George C. Edwards
III, On Deaf Ears: The Limits of the Bully Pulpit, New Haven, Conn.: Yale University Press,
2006, p. 208.
26 Jack Gorman and Sara Gorman, Denying to the Grave: Why We Ignore the Facts That Will
Save Us, New York: Oxford University Press, 2016.
27 Jonas T. Kaplan, Sarah I. Gimbel, and Sam Harris, “Neural Correlates of Maintain-
ing One’s Political Beliefs in the Face of Counterevidence,” Scientific Reports, Vol. 6, Arti-
cle 39589, 2016.
28 John M. Carey, Brendan Nyhan, Benjamin Valentino, and Mingnan Liu, “An Inflated
View of the Facts? How Preferences and Predispositions Shape Conspiracy Beliefs About the
Deflategate Scandal,” Research and Politics, Vol. 3, No. 3, July-September 2016.
29 Redlawsk, Civettini, and Emmerson, 2010.
30 Lewandowsky et al., 2012, pp. 112–113.
behavior change, whereas strong fear appeals with low-efficacy messages produce the great-
est levels of defensive responses.” See Kim Witte and Mike Allen, “A Meta-Analysis of Fear
Appeals: Implications for Effective Public Health Campaigns,” Health Education and Behav-
ior, Vol. 27, No. 5, 2000, p. 591.
35 Pratkanis and Aronson, 2001, pp. 121–153.
36 James Druckman, “On the Limits of Framing Effects: Who Can Frame?” Journal of Poli-
tics, Vol. 63, No. 4, 2001; Lewandowsky et al., 2012, p. 113. In a study examining the impact
of message origin on the likelihood that a consumer will believe an advertisement’s claim,
the credibility of a source correlated with its ability to persuade; Shailendra Pratap Jain and
Steven S. Posavac, “Prepurchase Attribute Verifiability, Source Credibility, and Persuasion,”
Journal of Consumer Psychology, Vol. 11, No. 3, 2001. Consumers were more likely to believe
claims if the source was viewed as trustworthy or had experience with the product. Surveys
show that an alarming number of Americans still believe that vaccines are dangerous, even
though this theory was discredited years ago (Nyhan and Reifler, 2012, p. 7).
37 Chessen, 2017a. There is even research suggesting that someone is more likely to believe
information coming from someone with an easily pronounceable name (according to the
individual consuming the information); Eryn J. Newman, Mevagh Sanson, Emily K. Miller,
Adele Quigley-McBride, Jeffrey L. Foster, Daniel M. Bernstein, and Maryanne Garry,
“People with Easier to Pronounce Names Promote Truthiness of Claims,” PLOS One, Vol. 9,
No. 2, 2014; and that people are more likely to be persuaded to adopt a position by someone
they find physically attractive; Alice H. Eagly and Shelly Chaiken, “An Attribution Analysis
of the Effect of Communicator Characteristics on Opinion Change: The Case of Commu-
nicator Attractiveness,” Journal of Personality and Social Psychology, Vol. 32, No. 1, 1975.
that belief perseverance and belief echoes are stronger with negative
information.38 One possible interpretation of some of these findings
is that a critical dynamic in misinformation campaigns is the relation-
ship between forewarning and the first mover advantage. If the defender
can inoculate enough people with the right preventive techniques and
information, it can deprive the information aggressor of the ability to
implant a narrative that becomes very difficult to dislodge. A central
finding is that social connections matter: People will trust information
from trusted sources, rather than placing their trust in some abstract
conception of accuracy or “truth.” One Nielsen survey, for example,
found that 83 percent of online users trust recommendations from
family and friends, compared with other studies that show collapsing
levels of trust in formal media institutions.39
This research would suggest a few basic conclusions for the util-
ity of hostile social manipulation. Such campaigns will be more effec-
tive by working within established beliefs rather than trying to change
them—to spark extreme actions by people of strong views, for example.
They will be more effective if they can employ social proof as the basis
of credibility. They will be successful if they use easy-to-grasp, graphi-
cally based presentations of information.40 And they will have greater
success the more they can repeat their message or restate the claimed
facts, achieving a presumed credibility through repetition.
What we know from the social science analysis of attitudes and atti-
tude change suggests that efforts to affect large-scale attitudes will be
38 Michael D. Cobb, Brendan Nyhan, and Jason Reifler, “Beliefs Don’t Always Persevere:
How Political Figures Are Punished When Positive Information About Them Is Discred-
ited,” Political Psychology, Vol. 34, No. 3, June 2013; Emily Thorson, “Belief Echoes: The
Persistent Effects of Corrected Misinformation,” Political Communication, Vol. 33, No. 3,
2016.
39 Todd C. Helmus and Elizabeth Bodine-Baron, “Empowering ISIS Opponents on Twit-
ter,” Santa Monica, Calif.: RAND Corporation, PE-227-RC, 2017, p. 2.
40 Lewandowsky, Ecker, and Cook, 2017, pp. 355–356.
41 Lewandowsky et al., 2012, pp. 113–116; Brendan Nyhan and Jason Reifler, “Does Cor-
recting Myths About the Flu Vaccine Work? An Experimental Evaluation of the Effects of
Corrective Information,” Vaccine, Vol. 33, No. 3, 2015b; Nyhan and Reifler survey some of
the relevant literature in “Displacing Misinformation About Events: An Experimental Test
of Causal Connections,” Journal of Experimental Political Science, Vol. 2, No. 1, 2015a, p. 81;
see also Flynn, Nyhan, and Reifler, 2017, pp. 130–131.
42 There is a significant body of research to support this phenomenon: Lakoff (2014) posits
that when we negate a frame, we evoke the frame, further embedding the undesirable asso-
ciation in an individual’s mind; George Lakoff, “Mapping the Brain’s Metaphor Circuitry:
Metaphorical Thought in Everyday Reason,” Frontiers in Human Neuroscience, Vol. 8, Arti-
cle 958, 2014. Tormala and Petty (2002, 2004) demonstrated that when people resist per-
suasive attacks, they can become more certain of their initial attitudes; Zakary L. Tormala
and Richard E. Petty, “What Doesn’t Kill Me Makes Me Stronger: The Effects of Resisting
Persuasion on Attitude Certainty,” Journal of Personality and Social Psychology, Vol. 83, No. 6,
2002; Zakary L. Tormala and Richard E. Petty, “Source Credibility and Attitude Certainty:
A Metacognitive Analysis of Resistance to Persuasion,” Journal of Consumer Psychology,
Vol. 14, No. 4, 2004. Nyhan and Reifler (2012) argue that corrections can fail due to such
factors as motivated reasoning and limitations of memory and cognition, as well as identity
factors, such as race and ethnicity. One study found that when “confronted with informa-
tion compellingly debunking a preexisting belief, only a minute proportion of people—2%
of participants in one study—explicitly acknowledged their beliefs were mistaken.” Most
instead “displayed some form of motivated reasoning by counterarguing against the refuta-
tion.” See John Cook, Ullrich K. H. Ecker, and Stephan Lewandowsky, “Misinformation
and How to Correct It,” in Robert Scott and Stephan Kosslyn, eds., Emerging Trends in the
Social and Behavioral Sciences, New York: John Wiley and Sons, 2015, p. 22. See also Tan
et al., 2016. Belief perseverance appears to be more pronounced when negative information
is corrected than when positive information is corrected; Cobb, Nyhan, and Reifler, 2013.
Research shows that once a piece of information is encoded, which happens quickly, it
has lingering effects on subsequent attitudes and reasoning even if the person receives new
information and changes his or her mind (Nyhan and Reifler, 2012, pp. 12–14). People tend
to continue to rely on outdated information even when it has been successfully retracted
or corrected. Dr. Emily Thorson calls this phenomenon belief echoes, and has conducted
several studies that demonstrate how exposure to a piece of negative information can influ-
ence attitudes, even if a correction is provided immediately, an individual fully accepts that
correction, and that individual’s political leanings predispose him or her to want to believe
that correction. Thorson’s research challenges the implicit assumption of well-intentioned
fact-checkers that “correction will eliminate the misinformation’s effect on attitudes,” and
suggests that even completely baseless accusations or criticism of an individual may result
in “substantial reputational damage” (Thorson, 2016, p. 476). On the effects of beliefs on
presidential elections, see also Edwards, 2006.
43 Brendan Nyhan and Jason Reifler, “When Corrections Fail: The Persistence of Politi-
cal Misperceptions,” Political Behavior, Vol. 32, No. 2, 2010; Lewandowsky et al., 2012, pp.
119–120.
44 A superb summary of the debate over the backfire effect is Daniel Engber, “LOL Some-
thing Matters,” Slate, January 3, 2018.
45 Man-pui Sally Chan, Christopher R. Jones, Kathleen Hall Jamieson, and Dolores Albar-
racín, “Debunking: A Meta-Analysis of the Psychological Efficacy of Messages Countering
Misinformation,” Psychological Science, Vol. 28, No. 11, 2017.
46 Chan et al., 2017.
47 Cook, Ecker, and Lewandowsky, 2015, p. 6; Pratkanis and Aronson, 2001, pp. 332–334;
Lewandowsky et al., 2012, p. 116; Cook, Ecker, and Lewandowsky, 2017.
48 Nyhan and Reifler, 2015a, pp. 83, 90; Lewandowsky et al., 2012, pp. 117–118.
49 Cook, Ecker, and Lewandowsky, 2015, p. 6; Lewandowsky et al., 2012, pp. 116–117.
50 Brendan Nyhan and Jason Reifler, “The Effect of Fact-Checking on Elites: A Field
Experiment on U.S. State Legislators,” American Journal of Political Science, Vol. 59, No. 3,
2015c.
51 There is a growing literature on this issue. See, for example, Monica Bulger and Pat-
rick Davison, The Promises, Challenges, and Futures of Media Literacy, New York: Data and
Society Research Institute, February 2018; and Jennifer Fleming, “Media Literacy, News
Literacy, or News Appreciation? A Case Study of the News Literacy Program at Stony Brook
University,” Journalism and Mass Communication Educator, Vol. 69, No. 2, 2014.
52 Though the evidence is mixed, there is some research support for the idea that these vari-
ous versions of trust are linked—i.e., that low generalized social trust is associated with low
trust in institutions. See Sonja Zmerli and Ken Newton, “Social Trust and Attitudes Toward
Democracy,” Public Opinion Quarterly, Vol. 72, No. 4, 2008. See also Chan S. Suh, Paul Y.
Chang, and Yisook Lim, “Spill-Up and Spill-Over of Trust: An Extended Test of Cultural
and Institutional Theories of Trust in South Korea,” Sociological Forum, Vol. 27, No. 2,
2012.
53 Jong-sung You, “Social Trust: Fairness Matters More Than Homogeneity,” Political Psy-
chology, Vol. 33, No. 5, 2012.
54 Bo Rothstein and Eric M. Uslaner, “All for All: Equality, Corruption, and Social Trust,”
World Politics, Vol. 58, No. 1, 2005.
55 Natalia Letki, “Investigating the Roots of Civic Morality: Trust, Social Capital, and
Institutional Performance,” Political Behavior, Vol. 28, No. 4, 2006; Blaine G. Robbins,
“Institutional Quality and Generalized Trust: A Nonrecursive Causal Model,” Social Indica-
tors Research, Vol. 107, No. 2, 2012.
this may affect not only people’s faith in those institutions, but also
their degree of social trust more broadly.
The nature of social ties and networks appears to affect levels
of trust. Some research suggests that the quality of people’s informal
social contacts influences their degree of generalized social trust.56
When people have well-developed informal networks that provide a
sense of reliable social contacts, they gain greater faith in their ability
to trust more broadly.
Comparative and cross-cultural research reveals that strong dif-
ferences in levels of trust emerge between societies. In some Scandina-
vian countries, well over half of people respond in surveys that other
people can generally be trusted, whereas in some low-trust countries
(such as Brazil and Turkey) the figure hovers in the single digits.57
Other research suggests that these differing levels of trust may even
persist across generations, among the children and grandchildren of
immigrants from these various regions.58 Some research has therefore
suggested that there is a cultural or ethnic component to social trust.
From an individual perspective, a person’s life experiences have a
significant effect on their degree of social trust.59 People learn to trust
(or not to trust) in part through the accumulation of their experiences,
including both experience with out-groups and educational and par-
enting lessons.60
Social science research also points to strong evidence for the
importance of social trust, and thus the significant dangers that could
56 Jennifer L. Glanville, Matthew A. Andersson, and Pamela Paxton, “Do Social Connec-
tions Create Trust? An Examination Using New Longitudinal Data,” Social Forces, Vol. 92,
No. 2, 2013.
57 Rothstein and Uslaner, 2005, p. 42.
58 Eric M. Uslaner, “Where You Stand Depends upon Where Your Grandparents Sat: The
Inheritability of Generalized Trust,” Public Opinion Quarterly, Vol. 72, No. 4, 2008.
59 Jian Huang, Henriëtte Maassen van den Brink, and Wim Groot, “College Education
and Social Trust: An Evidence-Based Study on the Causal Mechanisms,” Social Indicators
Research, Vol. 104, No. 2, 2011.
60 Jennifer L. Glanville and Pamela Paxton, “How Do We Learn to Trust? A Confirmatory
Tetrad Analysis of the Sources of Generalized Trust,” Social Psychology Quarterly, Vol. 70,
No. 3, 2007.
61 Francis Fukuyama, Trust: The Social Virtues and the Creation of Prosperity, New York:
Free Press, 1995. See also Stephen Knack and Philip Keefer, “Does Social Capital Have an
Economic Payoff? A Cross-Country Investigation,” Quarterly Journal of Economics, Vol. 112,
No. 4, 1997.
62 Christian Bjørnskov and Stefan Voigt, “Constitutional Verbosity and Social Trust,”
Public Choice, Vol. 161, No. 1/2, 2014.
Emerging Technologies
Table 4.1. Emerging Technologies of Social Manipulation and Virtual Societal Aggression (columns: Technology, Description)
Artificial Intelligence
3 CY Yam, “Emotion Detection and Recognition from Text Using Deep Learning,” Micro-
soft Developer Blog, November 29, 2015.
4 The distinction between short-term and long-term applications is made in Michael C.
Horowitz, “Artificial Intelligence, International Competition, and the Balance of Power,”
Texas National Security Review, Vol. 1, No. 3, May 2018, pp. 41–42.
5 Paul Scharre and Michael C. Horowitz, Artificial Intelligence: What Every Policymaker
Needs to Know, Washington, D.C.: Center for a New American Security, June 2018, p. 4.
6 Scharre and Horowitz, 2018, p. 8.
7 Peter Stone et al., Artificial Intelligence and Life in 2030: One Hundred Year Study on Arti-
ficial Intelligence, Stanford, Calif.: Stanford University, September 2016, pp. 14–17.
8 Alec Ross, The Industries of the Future, New York: Simon and Schuster, 2016, p. 28.
9 Darrell M. West, “Will Robots and AI Take Your Job? The Economic and Political Con-
sequences of Automation,” Brookings Institution, April 18, 2018.
Algorithmic Decisionmaking
13 For a discussion of these obstacles see Horowitz, 2018, p. 44. See also M. L. Cummings,
Heather M. Roff, Kenneth Cukier, Jacob Parakilas, and Hannah Bryce, Artificial Intelligence
and International Affairs: Disruption Anticipated, London: Chatham House, 2018, p. v.
14 Scharre and Horowitz, 2018, pp. 14–15.
15 Algorithmic solutions can be used as the foundation for AI applications, but simple algo-
rithms are not AI. As one definition puts it, “An algorithm is a set of instructions—a preset,
rigid, coded recipe that gets executed when it encounters a trigger. AI on the other hand—
which is an extremely broad term covering a myriad of AI specializations and subsets—is a
group of algorithms that can modify its algorithms and create new algorithms in response
to learned inputs and data as opposed to relying solely on the inputs it was designed to rec-
ognize as triggers”; Kaya Ismail, “AI vs. Algorithms: What’s the Difference?” CMS Wire,
October 26, 2018.
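To make the distinction quoted above concrete, the following minimal sketch (a hypothetical illustration in Python, not drawn from the sources cited) contrasts a fixed, rule-based routine with a toy learning routine that revises its own decision rule as labeled examples arrive.

```python
# Hypothetical illustration of the algorithm-versus-AI distinction quoted above;
# both routines are toys invented for this example.

def rule_based_filter(message: str) -> bool:
    """A fixed algorithm: a preset recipe triggered by hard-coded keywords."""
    banned = {"spam", "scam"}
    return any(word in message.lower() for word in banned)

class LearningFilter:
    """A toy learning system: it revises its own decision rule from labeled examples."""

    def __init__(self) -> None:
        self.weights = {}

    def update(self, message: str, is_bad: bool) -> None:
        # Adjust per-word weights in response to new labeled inputs.
        for word in message.lower().split():
            self.weights[word] = self.weights.get(word, 0.0) + (1.0 if is_bad else -1.0)

    def predict(self, message: str) -> bool:
        score = sum(self.weights.get(w, 0.0) for w in message.lower().split())
        return score > 0

# The rule-based filter never changes; the learning filter's behavior depends on its data.
learned = LearningFilter()
learned.update("win a free prize now", is_bad=True)
learned.update("meeting moved to noon", is_bad=False)
print(rule_based_filter("free prize inside"), learned.predict("claim your free prize"))
```

The point of the contrast is the one the footnote draws: the first routine will behave the same way indefinitely, while the second changes its own behavior as a function of the data it is fed.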
16 Will Knight, “The Dark Secret at the Heart of AI,” MIT Technology Review, April 11,
2017. See also Kartik Hosanagar and Vivian Jair, “We Need Transparency in Algorithms,
But Too Much Can Backfire,” Harvard Business Review, July 25, 2018.
17 Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and
Information, Cambridge, Mass.: Harvard University Press, 2015, pp. 6, 8. See also Schneier,
2018, pp. 82–87.
18 O’Neil, 2016, p. 173.
a place holder for the real patient who is not in the bed but in the
computer. That virtual entity gets all our attention.” He cites stud-
ies showing that doctors already spend more than twice as much time
reviewing electronic medical records as they do in personal contact
with patients.19 Over time, this system will advance to the use of algo-
rithms to make decisions independently of doctors. Mount Sinai Hos-
pital in New York has used algorithmic analysis to process data in
patient records and forecast disease.20 Medical chatbots are becoming
more common and have the potential to become the default interface
between many people, especially those at the lower end of the socioeco-
nomic spectrum, and the medical community.21
These tools pose several risks. First, to the extent that deep learn-
ing processes allow computers to discover patterns on their own, their
programmers simply do not know how the algorithms actually work.22
The result is that algorithmic decisionmaking “in human resources,
health, and banking, just to name a few,” is “quickly establishing broad
norms that exert upon us something very close to the power of law. If
a bank’s model of a high-risk borrower, for example, is applied to you,
the world will treat you as just that, a deadbeat—even if you’re horribly
misunderstood.”23 People will be rejected for loans and jobs for reasons
they do not understand; more broadly, the societal pattern that could
result might become a “rule of scores.”24 One example is the area of
hiring, where personality tests are now being used on “60 to 70 percent
of prospective workers in the United States.”25
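The opacity described in this paragraph can be illustrated with a minimal sketch; the features, weights, and cutoff below are invented for the example and do not represent any actual lending or hiring model.

```python
# Hypothetical sketch of opaque algorithmic scoring; features, weights, and the
# cutoff are invented for illustration and do not come from any real lender.

def risk_score(applicant: dict) -> float:
    # Weights learned elsewhere, perhaps by a model the operators cannot fully interpret.
    weights = {"late_payments": 0.6, "zip_code_risk": 0.3, "years_employed": -0.2}
    return sum(weights[k] * applicant.get(k, 0.0) for k in weights)

def loan_decision(applicant: dict, cutoff: float = 1.0) -> str:
    # The applicant sees only the outcome; the score and its inputs stay hidden.
    return "rejected" if risk_score(applicant) > cutoff else "approved"

applicant = {"late_payments": 2, "zip_code_risk": 3, "years_employed": 1}
print(loan_decision(applicant))  # prints "rejected", with no explanation offered
```

Even in this toy version, a proxy feature (here, a hypothetical zip_code_risk variable) can quietly import bias, while the person affected sees only a verdict that carries, as the text puts it, something very close to the power of law.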
19 Abraham Verghese, “How Tech Can Turn Doctors into Clerical Workers,” New York
Times Magazine, May 16, 2018.
20 Knight, 2017.
21 Douglas Heaven, “Dr. Bot Will See You Now,” MIT Technology Review, November/
December 2018, p. 22.
22 Knight, 2017.
23 O’Neil, 2016, pp. 29, 51.
24 Pasquale, 2015, p. 191.
25 O’Neil, 2016, p. 108.
28 Alex Aharanov, “What Is the Future of Augmented and Virtual Reality?” Jabil Blog,
March 27, 2018.
29 Ashley Rodriguez, “In Five Years, VR Could Be as Big in the US as Netflix,” Quartz,
June 6, 2018.
30 Peter Rubin, “You’ll Go to Work in Virtual Reality,” Wired, June 2018, p. 61.
31 Jayson DeMers, “7 New Technologies Shaping Online Marketing for the Better (We
Hope),” Forbes, August 15, 2016.
32 Alfred Ng, “VR Systems Oculus Rift, HTC Vive May Be Vulnerable to Hacks,” CNET,
April 17, 2018.
33 Andrew J. Andrasik, Hacking Humans: The Evolving Paradigm with Virtual Reality, SANS
Institute, Information Security Reading Room, November 2017. See also Eric E. Sabelman
and Roger Lam, “The Real-Life Dangers of Augmented Reality,” IEEE Spectrum, June 23,
2015.
could find ways to send messages subliminally, seeding users with cer-
tain views or disruptions. Broadly speaking, VR and AR offer whole
virtual worlds that manipulators can hack and modify to achieve their
desired goals.
Internet of Things
34 Global Agenda Council on the Future of Software and Society, Deep Shift: Technology
Tipping Points and Social Impact, Cologny, Switzerland: World Economic Forum, 2015, p. 8.
35 Codrin Arsene, “IoT Ideas That Will Soon Revolutionize Our World in 8 Ways,”
Y Media Labs, November 24, 2016.
36 Global Agenda Council on the Future of Software and Society, 2015, pp. 7–8.
37 Global Agenda Council on the Future of Software and Society, 2015, p. 7.
Voice-Enabled Interfaces
40 Will Knight, “Hordes of Research Robots Could Be Hijacked for Fun and Sabotage,”
MIT Technology Review, July 24, 2018c.
41 Adam Segal, The Hacked World Order: How Nations Fight, Trade, Maneuver, and Manip-
ulate in the Digital Age, New York: PublicAffairs, 2016, pp. 99–101.
42 Sarah Perez, “Voice-Enabled Smart Speakers to Reach 55% of U.S. Households by 2022,
Says Report,” TechCrunch, November 8, 2017. See also J. Walter Thompson Intelligence,
Speak Easy, New York, June 2017.
43 Clark Boyd, “The Past, Present and Future of Speech Recognition Technology,” The
Startup, January 10, 2018.
44 See Yaniv Leviathan, “Google Duplex: An AI System for Accomplishing Real-World
Tasks Over the Phone,” Google AI Blog, May 8, 2018.
ing, and the task has proven exceedingly difficult, with AI unable to
distinguish basic common-sense implications of people’s requests.45
A standing technology challenge to create a chatbot that can fool a
panel of human judges—by making them believe they are interact-
ing with a real person—has not yet come especially close to success.
Yet people report significant degrees of emotional attachment even to
the relatively primitive chatbots that exist today, and chatbots already
dispense medical and legal advice and serve as the initial gateway to
many customer service interactions.46 As the technology improves, it
will be paired with voice-enabled interfaces to create a fundamentally
new interactive experience.
Increasingly, these systems are achieving the ability to sense peo-
ple’s emotional states. Amazon is developing voice analysis capabili-
ties to allow its Alexa concierge to understand the speaker’s emotional
state. SONOS has “filed a patent for technology that could customize
playlists based on the emotion in [a listener’s] voice or biometric data
obtained from a wearable device, like perspiration or heart rate, all
cross-referenced” with the listener’s music history.47
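As a rough indication of the kind of signal such systems draw on, the toy sketch below maps two crude acoustic properties, loudness and an approximate pitch, onto a coarse emotional guess; the thresholds and labels are invented, standing in for the trained models that real voice assistants would use.

```python
import numpy as np

# Toy sketch of voice-based emotion inference; thresholds and labels are invented
# stand-ins for the trained models commercial systems would rely on.

def crude_emotion_guess(samples: np.ndarray, sample_rate: int) -> str:
    energy = float(np.sqrt(np.mean(samples ** 2)))  # overall loudness (RMS)
    zero_crossings = int(np.sum(np.abs(np.diff(np.sign(samples)))) // 2)
    pitch_proxy = zero_crossings * sample_rate / (2 * len(samples))  # very rough pitch
    if energy > 0.3 and pitch_proxy > 220:
        return "agitated"
    if energy < 0.05:
        return "subdued"
    return "neutral"

# Example with a synthetic one-second signal.
sr = 16_000
t = np.linspace(0, 1, sr, endpoint=False)
signal = 0.5 * np.sin(2 * np.pi * 300 * t)  # loud 300 Hz tone
print(crude_emotion_guess(signal, sr))      # prints "agitated"
```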
Over time, these portals are likely to shift from a distinct cat-
egory of curious (and often inaccurate) computerized ask-and-response
technologies to become the basic portal through which people conduct
most of their interactions with the larger technology systems around
them. (Industry leaders have begun to refer to this trend as the creation
of “ambient computing.”48) Many activities, including dictating term
papers, making internet searches, ordering take-out, setting thermo-
stats, dictating the parameters of diet (and the orders sent to grocery
stores), and having dialogues about medical diagnoses, will take place
49 “Blockchain and the Internet of Things: The IoT Blockchain Opportunity and Chal-
lenge,” i-SCOOP, September 2016 (updated February 2018).
50 Michael J. Casey and Paul Vigna, “In Blockchain We Trust,” MIT Technology Review,
April 9, 2018.
51 Ross, 2017, pp. 98–103; Global Agenda Council on the Future of Software and Society,
2015, p. 24.
52 Global Agenda Council on the Future of Software and Society, 2015, p. 26.
Fabricated Audio
Computer-generated audio is not a new phenomenon. There are a wide
variety of services, from online translators to applications on mobile
phones, that provide computer-generated audio by linking a collection
of short recorded speech fragments to create a sentence.53
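A minimal sketch of that fragment-stitching approach appears below; the word-level “recordings” are invented numpy placeholders, and real concatenative systems work with much finer units such as diphones. It also shows why such output sounds choppy: fragments are joined end to end with no modeling of pitch or timing across the seams.

```python
import numpy as np

# Toy concatenative synthesis: prerecorded word clips (here, invented placeholder
# waveforms) are simply joined end to end, with no smoothing of pitch or timing.

SAMPLE_RATE = 16_000

def fake_clip(seconds: float, freq: float) -> np.ndarray:
    """Stand-in for a recorded speech fragment."""
    t = np.linspace(0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    return 0.3 * np.sin(2 * np.pi * freq * t)

RECORDED_WORDS = {
    "open": fake_clip(0.4, 180.0),
    "the": fake_clip(0.2, 200.0),
    "door": fake_clip(0.5, 160.0),
}

def synthesize(sentence: str) -> np.ndarray:
    # Concatenate whichever fragments exist; unknown words are silently skipped.
    clips = [RECORDED_WORDS[w] for w in sentence.lower().split() if w in RECORDED_WORDS]
    return np.concatenate(clips) if clips else np.zeros(0)

audio = synthesize("Open the door")
print(len(audio) / SAMPLE_RATE, "seconds of stitched audio")  # about 1.1 seconds
```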
Digital voices like these, however, are currently limited by the
sentence fragments memorized and the absence of natural tendencies
found in speech, such as emotion and sentence flow, often leading
the voice to sound a bit synthetic.54 This limitation makes it easier to
distinguish the audio of a real person from audio produced by a com-
puter. In comparison, audio generation technology that is cur-
rently being developed works differently, using recent breakthroughs
in neural networks to learn and mimic the properties of human speech
in an effort to create a more natural-sounding voice.55 In addition to
improving the overall quality of computer-generated speech, technol-
ogy is under development to replicate any given individual’s voice from
scratch, which could be used to make anyone say anything that the
manipulators want. One example of this software is being developed
by Adobe, which in 2016 presented a technology demo of software
called VoCo. Referred to as the Photoshop of audio, the software
53 “Fake News: You Ain’t Seen Nothing Yet,” The Economist, July 1, 2017.
54 “Fake News: You Ain’t Seen Nothing Yet,” 2017.
55 Avi Selk, “This Audio Clip of a Robot as Trump May Prelude a Future of Fake Human
Voices,” Washington Post, May 3, 2017.
56 “Let’s Get Experimental: Behind the Adobe MAX Sneaks,” Adobe Blog, November 4,
2016.
57 Adobe Inc., “#VoCo. Adobe MAX 2016 (Sneak Peeks),” November 4, 2016.
58 Bahar Gholipour, “New AI Tech Can Mimic Any Voice,” Scientific American, May 2,
2017.
59 Gholipour, 2017.
60 Lyrebird, “With Great Innovation Comes Great Responsibility,” undated.
61 “Imitating People’s Speech Patterns Precisely Could Bring Trouble,” The Economist,
April 20, 2017. See VivoText’s website for more detail on its emerging products.
62 Aäron van den Oord, Tom Walters, and Trevor Strohman, “WaveNet Launches in the
Google Assistant,” DeepMind Blog, October 4, 2017.
63 Gholipour, 2017.
64 Rob Price, “AI and CGI Will Transform Information Warfare, Boost Hoaxes, and Esca-
late Revenge Porn,” Business Insider, August 12, 2017.
65 Gholipour, 2017.
66 Lee Ferran, “Beware the Coming Crisis of ‘Deep Fake News,’” RealClearLife, July 27,
2018. See also the discussion at the Heritage Foundation, “Deep Fakes: A Looming Chal-
lenge for Privacy, Democracy, and National Security,” panel discussion, Washington, D.C.,
July 19, 2018.
67 “Fake News: You Ain’t Seen Nothing Yet,” 2017. See also Ferran, 2018.
68 Faizan Shaikh, “Introductory Guide to Generative Adversarial Networks (GANs) and
Their Promise!” Analytics Vidhya, June 15, 2017.
69 Greg Allen and Taniel Chan, Artificial Intelligence and National Security, Cambridge,
Mass.: Belfer Center for Science and International Affairs, Harvard Kennedy School, July
2017, p. 29. See also Will Knight, “These Incredibly Realistic Fake Faces Show How Algo-
rithms Can Now Mess with Us,” MIT Technology Review, December 14, 2018e.
70 “Fake News: You Ain’t Seen Nothing Yet,” 2017.
71 Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen, “Progressive Growing of
GANs for Improved Quality, Stability, and Variation,” Sixth International Conference on
Learning Representations, Conference Proceedings, Vancouver, Canada, 2018; Hillary Grigo-
nis, “A.I. Creates Some of the Most Realistic Computer-Generated Images of People Yet,”
Digital Trends, October 30, 2017.
72 Karras et al., 2018.
73 Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang,
and Dimitris Metaxas, “StackGAN: Text to Photo-Realistic Image Synthesis with Stacked
Generative Adversarial Networks,” IEEE International Conference on Computer Vision, Con-
ference Proceedings, Venice, Italy: Institute of Electrical and Electronics Engineers, October
2017, pp. 8–14.
74 James Vincent, “Artificial Intelligence Is Going to Make It Easier Than Ever to Fake
Images and Video,” The Verge, December 20, 2016.
75 Adario Strange, “Face-Tracking Software Lets You Make Anyone Say Anything in Real
Time,” Mashable, March 20, 2016.
76 Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, and Matthias
Nießner, “Face2Face: Real-Time Face Capture and Reenactment of RGB Videos,” IEEE
Conference on Computer Vision and Pattern Recognition, Conference Proceedings, Las Vegas,
Nev.: Institute of Electrical and Electronics Engineers, 2016.
77 Jennifer Langston, “Lip-Syncing Obama: New Tools Turn Audio Clips into Realistic
Video,” UW News, July 11, 2017.
78 Langston, 2017.
79 Yitong Li, Martin Renqiang Min, Dinghan Shen, David Carlson, and Lawrence Carin,
“Video Generation from Text,” Thirty-Second AAAI Conference on Artificial Intelligence,
Conference Proceedings, New Orleans, La.: Association for the Advancement of Artificial
Intelligence, February 2018, pp. 3–6.
80 Li et al., 2018, pp. 5–8.
81 Haoye Cai, Chunyan Bai, Yu-Wing Tai, and Chi-Keung Tang, “Deep Video Generation,
Prediction and Completion of Human Action Sequences,” European Conference on Computer
Vision 2018, Conference Proceedings, Munich, Germany, September 2018, p. 8.
82 Vincent, 2016.
83 “Fake News: You Ain’t Seen Nothing Yet,” 2017.
84 Quoted in Joshua Rothman, “Afterimage,” New Yorker, November 12, 2018, p. 40.
forms and VR and AR spaces well enough that we will not be able to
distinguish real from CGI beings.85
Surveillance Systems
Almost every new technology linked to the cloud or IoT will be able
to serve some kind of surveillance or “sousveillance” purpose by some-
time between 2030 and 2035. (Sousveillance refers to the capturing
of activity from “below,” in such ways as capturing data on people’s
internet habits, recording their movements through cellphone loca-
tions, recording of audio by home-based sound-activated assistants,
and more.) Visibility and oversight of a person’s or group’s activity will
necessarily involve monitoring their movements, communications,
transactions, and behaviors. Indeed, what is arising is a sort of ambient
surveillance system in which vast troves of data are being gathered and
processed, and made theoretically available to governments and other
actors, nominally for helpful or commercial purposes, but
which nonetheless reflect a degree of data collection on individuals
that would have been unthinkable for even the most authoritarian gov-
ernment just a generation ago. At the same time, the advance of AI is
creating a demand for ever-increasing data collection to provide the
essential fuel for training these systems, producing additional finan-
cial incentives for vacuuming up huge swaths of information about
Americans.
By 2030, motion-magnifying and microscope software (called
Eulerian Video Magnification [EVM]) will be standard in surveillance
and smartphone cameras and video recordings; these tools reveal details
that would otherwise be invisible (e.g., tiny movements in eye muscles,
blood flow, microscopic movements in seemingly stable objects). There
will also be at least 1 billion drones on earth, most of which will carry
85 Jesus Diaz, “The Weird, Wild Future of CGI,” Fast Company, October 19, 2017.
86 Hao-Yu Wu, Michael Rubinstein, Eugene Shih, John Guttag, Frédo Durand, and Wil-
liam Freeman, “Eulerian Video Magnification for Revealing Subtle Changes in the World,”
ACM Transactions on Graphics, Vol. 31, No. 4, 2012.
87 Gareth Evans, “Robotic Insects Add Whole New Meaning to ‘Fly-on-the-Wall’ Surveil-
lance,” Army Technology, March 16, 2015.
88 Global Agenda Council on the Future of Software and Society, 2015, p. 7.
89 Tiku, 2018, p. 55.
90 Ido Kilovaty, “Doxfare: Politically Motivated Leaks and the Future of the Norm on Non-
Intervention in the Era of Weaponized Information,” Harvard National Security Journal,
Vol. 9, No. 1, 2018.
1 For a description of the ways in which multiple autocratic regimes are using these tech-
nologies for domestic and international influence, see Juan Pablo Cardenal, Jacek Kucharc-
zyk, Grigorij Mesežnikov, and Gabriela Pleschová, Sharp Power: Rising Authoritarian Influ-
ence, Washington, D.C.: National Endowment for Democracy, December 2017.
part to bring to life a series of images about the unfolding future of the
infospheres of advanced democracies in narrative ways.
These futures are not predictions. In outlining them, we are not
suggesting that the specific combination of factors reflected in any one
of these futures is likely to emerge as described. They are illustrative and
suggestive, designed to provoke discussion about the potential future
trajectory of hostile social manipulation and virtual societal warfare.
But each is deeply grounded in research on the changing infosphere,
the social science of influence and trust, and the character of emerg-
ing technologies. The scenarios are designed to illustrate three possible
dangers inherent in emerging technologies. There are some contrary
trends under way—such as the growing, if still incomplete, efforts by
social media platforms to counteract fabricated information—that are
likely to moderate some of these outcomes. But the scenarios described
below remain plausible given emerging technologies and if the govern-
ments and institutions in many nations do not actively work to shape
the character of the evolving infosphere.
Furthermore, these three futures are not presented as mutually
exclusive possible outcomes. Each is built around a single leading vari-
able drawn from trends in the infosphere: One focuses on the decline
of any firm and objective sense of the distinction between real facts
and events and fabricated or incorrect ones; one focuses on the rise of
echo chambers and the collapse into a fragmented information real-
ity of self-reinforcing silos of belief and information; and one focuses
on the growing role of algorithmic decisionmaking. Any actual future
is likely to involve some degree of all of these main drivers, and they
overlap to some degree; a world of information silos is also a world in
which the objective sense of reality has faded. By focusing on each in
turn, the futures allow us to investigate various aspects of an emerging
infosphere in detail.
Finally, we have proposed that these futures could arrive between
the years 2023 and 2028. The dates chosen are somewhat arbitrary,
though they reflect how close the infosphere appears to be, in terms
of technology and structure, to each future. The dates are not meant
to be predictive.
Future 1: The Death of Reality (2025 Scenario)
In this sense, more of human life has come to reflect the artis-
tic, the created, and the imagined rather than the literally “true”—the
Hollywoodization of life, in a much more ontological sense than
the cultural sense that was once imagined.3 (It was Picasso who said
that “[a]rt is not truth; art is a lie that enables us to recognize truth.”4
He could not have imagined how seriously many in the entertainment
business would take that sentiment.) The story of the late 20th and
early 21st centuries was already one of a declining barrier between true
and fake, real and unreal. Several powerful technologies have now
accelerated that blending in dramatic ways.
These emerging patterns have been fueled in part by an uncom-
fortable but unavoidable truth: Human beings favor comfort and plea-
sure over facts. “Only to a limited extent does man want truth,” Fried-
rich Nietzsche argued in a prescient forecast of the current trends in
reality-bending. “He desires the pleasant, life-preserving consequences
of truth; to pure knowledge without consequences he is indifferent, to
potentially harmful and destructive truths he is even hostile.” Human
consciousness, Nietzsche believed, is “an apparatus for abstraction and
simplification—designed not for knowledge but for gaining control of
things.”5
Over the last few decades, corporations, news media, and the
entertainment industry increasingly have decided to accept and work
within this basic aspect of human nature rather than fight it. They
are giving people what they want, and at a time of social disruption,
slowing growth, rising inequality, and alienation in the face of increas-
ingly impenetrable institutions, what they want is perceived, more
often than not, to be sensational, extreme, and targeted against some
sort of out-group that allows the audience to deepen its sense of social
membership.
3 Stacy Schiff, “The Interactive Truth,” New York Times, June 15, 2005.
4 David Shields, Reality Hunger: A Manifesto, New York: Knopf, 2010, pp. 14, 32, 34–35,
40–42.
5 Friedrich Nietzsche, On Truth and Untruth: Selected Writings, trans. and ed. Taylor
Carman, New York: Harper Perennial, 2010, pp. 24, 121.
6 Regina Joseph, “A Peek into the Future: A Stealth Revolution by Influence’s New Mas-
ters,” in Weston Aviles and Sarah Canna, eds., White Paper on Influence in an Age of Rising
Connectedness, Washington, D.C.: U.S. Department of Defense, August 2017, p. 11.
7 Tom Nichols, “How America Lost Faith in Expertise,” Foreign Affairs, March-April 2017b.
The death of reality has been midwifed by the explosion, over the past
five to seven years, of astonishing developments in a critical area of
technology: lifelike audio and video and the associated technologies
of fakery that continue to blur the boundaries between “true” (verifi-
able) and invented content.11 People have been making replicas and
fake content for decades, but the difference now is that these fakes,
whether images, audio clips, or videos, are of such high fidelity that
they make it almost impossible for anyone to verify their authenticity
without time-consuming and, in some cases, very extensive investiga-
tion.12 The same technologies, combined with advanced robotics and
AI, are just beginning to cross the line into the long-awaited future
of humanlike robots, though even now that threshold remains to be
crossed in a meaningful way.
The first to come along was simple image creation. The very name
“Photoshop” became a universal term to denote the faking or modifi-
cation of a still image. In 2025, the programs, some driven by machine-
learning processes, are so good that would-be fakers can assemble
images—by voice command—of just about anything they want.13 As
long ago as 2015, the website Reddit was hosting competitions in image
10 See the discussion by Simon Sinek in National Public Radio, “Trust and Consequences,”
TED Radio Hour, May 15, 2015.
11 “Fake News: You Ain’t Seen Nothing Yet,” 2017.
12 Kevin Roose, “Here Come the Fake Videos, Too,” New York Times, March 4, 2018.
13 Vincent, 2016; Karras et al., 2018; Grigonis, 2017; Zhang et al., 2017.
14 Cade Metz and Keith Collins, “How an AI ‘Cat-and-Mouse Game’ Generates Believable
Fake Photos,” New York Times, January 2, 2018.
15 Selk, 2017.
16 “Let’s Get Experimental: Behind the Adobe MAX Sneaks,” 2016; Adobe Inc., 2016;
“Imitating People’s Speech Patterns Precisely Could Bring Trouble,” 2017.
17 Price, 2017.
18 Gholipour, 2017; Lyrebird, undated.
19 Strange, 2016; Thies et al., 2016; Langston, 2017.
20 Li et al., 2018; Cai et al., 2018.
23 Matt Chessen, The MADCOM Future, Washington, D.C.: Atlantic Council, 2017b.
24 Jazz listeners held out for years, rejecting the artificiality inherent in the digital genera-
tion of music, until 2023, with the release of “Waterfront Lullaby,” an album of avant-garde
jazz first attributed to a human group called “The Fusion Collective” and quickly declared
by The Jazz Review to be “a new classic, the greatest expression of the genre in a decade if not
more—and yet more proof, if anyone needs it, that we need human intuition, craft, and, yes,
soul behind the creation of true jazz.” When “The Fusion Collective” was revealed to be a
all print ads are AI-generated. In 2024, the first theatrically released
major motion picture emerged that had been created entirely inside
a computer. Appropriately enough, it was a science fiction tale about
a future society dependent on a massive central information nervous
system. It starred a combination of images of real actors (the rights to
which were purchased) and several entirely fictional digital creations.
It cost about $18 million to make—ten cents on the dollar compared with the cost
of actual, physical films.25
Efforts to counter these technologies, and to re-establish a stron-
ger sense that people can distinguish real images from fake, have been
under way for some years. The U.S. Department of Defense sponsored
work that used AI to help assess images and call out fabricated video.26
These systems provided some degree of accuracy in catching early-
generation video fabrications by using simple rules of thumb, such as
searching for videos in which the people do not blink. But forgers,
some state-sponsored, have moved well past earlier versions of easily
detectable fakery to far more sophisticated, AI-supported techniques.
And while existing software filters could theoretically help determine
accuracy, most people simply do not use them, and faked video and
audio files that agree with the pre-established beliefs of segments of
the population are widely accepted even though easily debunked. The
pattern with such fakes thus has continued the trend established with
basic facts: Invalid claims that could and should be countered through
straightforward correction can still persuade significant proportions of
the population.
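The early blink-based screening mentioned above can be sketched in miniature; the per-frame eye-openness scores, frame rate, and thresholds below are invented, and real forensic tools combine many richer cues than blink rate alone.

```python
import numpy as np

# Toy version of the "do the people blink?" heuristic; the eye-openness scores and
# thresholds are invented, and real detectors use far richer forensic signals.

def blink_rate(eye_openness: np.ndarray, fps: float, closed_below: float = 0.2) -> float:
    """Count closed-eye episodes per minute from a per-frame eye-openness signal."""
    closed = eye_openness < closed_below
    # A blink begins wherever a closed frame follows an open frame.
    blink_starts = int(np.sum(closed[1:] & ~closed[:-1]))
    minutes = len(eye_openness) / (fps * 60.0)
    return blink_starts / minutes if minutes > 0 else 0.0

def flag_possible_fake(eye_openness: np.ndarray, fps: float = 30.0) -> bool:
    # People typically blink many times per minute; early face-swap videos often did not.
    return blink_rate(eye_openness, fps) < 5.0

# Sixty seconds of synthetic "eyes always open" scores gets flagged for review.
always_open = np.full(30 * 60, 0.9)
print(flag_possible_fake(always_open))  # prints True
```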
These technologies of image, audio, and visual fakery have evolved
in parallel and have largely now merged, with dramatic advances in
27 David Pierce, “Enjoy Your New Virtual Office,” Wired, February 2018, p. 43.
28 Graham Roberts, “Augmented Reality: How We’ll Bring the News into Your Home,”
New York Times, February 1, 2018.
29 Lauren Smiley, “Something to Watch Over Me,” Wired, January 2018.
The result of all this has been the decline of public adherence to any-
thing like a generally accepted set of facts on major social issues.
Not all “facts” are disappearing or have become irrelevant. Cer-
tain prosaic, scientifically based, and personally attested facts persist.
Bacteria are still understood to cause infections and are treated with
antibiotics. Engineers are still able to calculate the support require-
ments for a new home or bridge.
Yet even in the realm of supposedly scientific assessments of truth,
more and more issues are contended. The debates over climate change
and vaccines are perhaps the leading examples, with most Americans
no longer confident of any meaningful reality in either area outside the
perceptions of their echo chambers. This trend has spread to other issues
on which there ought to be some basis of objective agreement, such as
the level of crime in society, the health threat posed by recreational
drugs, and the reliability of treatments for various diseases. In these
and other cases, shadowy campaigns have employed multiple means of
disinformation and fakery to undermine any potential for consensus.
The result has been to paralyze government policy responses.
The challenge is even more intense for major social and political
issues that are in contention, as the basis of objective information that
people can use to resolve and render judgments about them is called
into question. The real trend is therefore more specific than the death
of truth: The problem now appears to be that the meaning of major
social issues and events is becoming increasingly indeterminate. Whether
on questions about vaccinations, climate change, crime, the effects of
certain economic policies, or a hundred other issues, it is increasingly
impossible to settle on any agreed-upon truths to support general judg-
ments among the body politic.
As much as a decline in concern for truth, the era represents an
explosion of the sort of cynical, hard-edged irony that has long been
resident in the 4chan community and other online communities that
refuse to take anything seriously. Increasingly, the online aesthetic is
one of parody, pranksterism, and a comprehensive spirit of noncon-
formist mockery designed to tear down established values and insti-
30 For a discussion of the trend as of 2017, see Angela Nagle, Kill All Normies: Online Cul-
ture Wars from 4chan and Tumblr to Trump and the Alt-Right, Winchester, United Kingdom:
Zero Books, 2017, especially pp. 5–7 and 28–29.
31 Virginia Heffernan, “Twilight of the Hackers,” Wired, February 2018, p. 14.
32 Shields, 2010, p. 25.
33 John A. Gans., Jr., “Governing Fantasyland,” Survival, Vol. 60, No. 3, June-July 2018,
p. 200.
misdeeds. It was up for two days before the politician got wind of it
and brought it to the attention of the paper’s editors.)
Efforts to shape perceived reality through social media have
been turbocharged in recent years. AI-driven bots now dominate the
competition on these platforms; they can detect countervailing posts
and argue in response in ways largely indistinguishable from human
beings. Combined with massive data theft and the establishment of
Facebook- or Google-sized databases on individual Americans (and to
a lesser extent Europeans) by hostile powers, an information aggressor
in Moscow or Beijing can now target ads, messages, and information as
precisely as any platform company. Recent reports suggest that China
has devoted tens of thousands of highly trained, social media–savvy,
English-speaking trolls to a persistent, ongoing campaign with specific
goals in terms of shifting perceptions and achieving specific behavioral
outcomes in the U.S. public.
Often the goals of such campaigns are not to make people
believe a certain narrative as much as to feed existing belief systems
and prompt alienation, outrage, and conflict. Aggressors are fab-
ricating videos designed to appeal to the most paranoid instincts of
fringe groups and to harden political polarization generally. Republi-
cans receive a constant dose of videos showing Democratic politicians
saying and doing awful things; Democrats receive similar videos of
Republican politicians.
Beyond simply affecting information accuracy and flow, infor-
mation aggressors are increasingly experimenting with ways to grab
control of parts of the increasingly created reality that confronts
Americans in their daily lives. There are verified cases of manipulators
hacking into chatbots that serve military posttraumatic stress disorder
(PTSD) patients and giving counterproductive advice. In a few cases,
it appears that sophisticated hackers have begun to alter VR streams,
slightly changing the events in videogames being played on VR head-
sets; hacking into VR-hosted workplace discussions to implant provoc-
ative audio; or adding disturbing images to the AR apps on cellphones.
An increasingly common tactic appears to be the insertion of sublimi-
nal messages into any VR or AR streams they can access.
Future 2: Silos of Belief (2024 Scenario)
It is 2024, and people have retreated ever further into closed, mutually
suspicious information environments. Multiple groups of people in the
same country are now essentially living in different realities.
Over time, the ability and inclination of people to closet them-
selves off into tightly self-referential communities of knowledge has
only accelerated. The outlets that served as sources of shared fact and
meaning, such as network news, major daily newspapers, and a bipar-
tisan core of political leaders, have continued to give way to more dis-
crete, bespoke, and often partisan sources of information. This shift
has been partly a function of changing economics and the continued
atrophying of respected information filters. In one recent example of
this trend, Washington Post owner Jeff Bezos declared that the newspa-
per’s business model was “irretrievable” and he could not continue to
subsidize its operations with tens of millions of dollars per year. The
paper’s staff has developed a plan to transition to an exclusively online
format and to shrink the size of the reporting staff, resulting in fewer
stories and a stream of just one or two investigative reports per month.
But the paper’s editors also made a fateful decision that reflects
the spirit of the age: After several years of a largely failed appeal to a
general readership, this famed major daily newspaper is now billing
itself as the “essential guide for the thoughtful progressive.” It is explic-
itly appealing to a specific demographic, trying to make itself the dom-
inant player in one mega-silo rather than drawing people together from
various belief groupings. This practice is now the norm for content cre-
ators of every variety. The name of the game is to find a silo and domi-
nate it—with a product, a channel of information, or a perspective—
rather than gain broad appeal.
These trends and this future began with the best of intentions.
People were seeking out those with similar interests for discussion and
shared ideas, whether regarding their hobby, their profession, or issues
of concern. They sought to align with those who had similar ideas in
order to marshal action on issues of shared importance. Such virtual
communities have proved important in shoring up people’s sense of
belonging and ontological security in an era of massive, often homoge-
nized societies, which carry the risk of submerging individual identities
in abstract patterns and trends. By allowing people to seek out simi-
lar individuals, social media platforms and other forms of engagement
offered an important psychological reassurance.
They also offered another form of empowering individual expres-
sion. With these capabilities, people could leapfrog the constraints
of their immediate surroundings and connect with people of similar
belief, experience, preference, or other shared identities or views. Those
facing bias or repression could connect with others in a comparable sit-
uation and be strengthened and supported. People whose beliefs have
been repressed by their community could find support through these
means.
The siloization of the information space is, in part, the natural
consequence of an era of individualization. Marketing campaigns
are now almost entirely personalized; the ads one person gets, even
down to their specific claims, are customized based on thousands of
data points available to the advertisers. The idea of appealing to broad
swaths of the population in one massive appeal now seems as primitive
as a 1950s cigarette advertisement. Marketers today are pushing but-
tons on AI-driven engines that scoop up troves of information from
databases and reach out to small groups or specific individuals with
highly tailored messages.
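In the scenario's terms, the mechanics of such tailoring might look something like the toy sketch below; the segments, profile attributes, and message copy are all invented for the example. A targeting engine matches an individual profile against a library of message variants and returns whichever one it predicts will resonate.

```python
# Toy illustration of message micro-targeting as described in the scenario;
# the segments, attributes, and copy below are invented for this example.

MESSAGE_VARIANTS = {
    "budget_conscious": "Prices this low won't last: lock in your rate today.",
    "status_seeker": "Join the select few who have already upgraded.",
    "community_minded": "Your neighbors are switching; see why.",
}

def pick_segment(profile: dict) -> str:
    # A real engine would score thousands of data points; this uses two toy attributes.
    if profile.get("coupon_use", 0) > 5:
        return "budget_conscious"
    if profile.get("luxury_purchases", 0) > 2:
        return "status_seeker"
    return "community_minded"

def tailored_message(profile: dict) -> str:
    return MESSAGE_VARIANTS[pick_segment(profile)]

print(tailored_message({"coupon_use": 9}))        # budget_conscious variant
print(tailored_message({"luxury_purchases": 4}))  # status_seeker variant
```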
But the challenge, as with many trends associated with the
modern infosphere, has been to keep such a capability from running
out of control and being expressed in ways that are socially destructive
rather than individually empowering. The fragmentation of the mar-
A Fragmented Society
tion silos that allow citizens to believe essentially whatever they want to
believe and constantly discredit those who believe anything else. The
emergence of these powerful information bubbles has deepened the
polarization that was already under way and that made it impossible to
enact any meaningful reforms.
In 2023, as just one example, the latest in a series of entitlement
program reforms—designed to address the ballooning federal deficit
and debt—collapsed before it even came to the floor of the House
or Senate for a vote, irrevocably shattered by the competing grass-
roots campaigns of a handful of social media information entrepre-
neurs spreading wild rumors and conspiracy theories within carefully
selected silos about what the legislation would produce. Fundraising
for charity is increasingly based on silos (asking for donations to help
others in such narrowly defined groups), which are not always related
to ethnicity, race, nationality, profession, or interests, but increasingly
appeal to a person’s identity as the member of a specific group. National
bipartisan consensus on major policy issues has been, in a sense, over-
taken by social events: It simply no longer exists, or can exist, in any
measurable way. Political leaders hopeful of making national progress
on any issue must now knit together bits and pieces of agreement from
dozens of silos.
A common theme governing many of these silos is that the slices
of information pouring into people’s realities have become highly sen-
sationalistic: violent, full of sexual content, and built on gossip and
innuendo. Over time, information marketers have discovered the pre-
cise human information appetites that most attract attention. Different
forms of information and content are tastier than others to the human
brain; we have a neurobiological predilection for the extreme and the
titillating.1 Some people have become trapped in self-reinforcing silos
of such content, and thousands have suffered significant emotional
trauma and psychological injuries as a result. Therapeutic treatment
centers for the “information traumatized” have sprung up across the
country.
Over time, these increasingly mutually suspicious silos, along with broader social trends of fragmentation and ideological polarization, have produced an escalating series of aggressive actions between opposing
groups. These actions have included trolling, cyberbullying, iden-
tity theft, distributed denial-of-service (DDoS) attacks, and floods
of emails and prompts that overwhelm a computer or system. These
wars—though among virtual communities—pit states against states,
states against nonstate actors, and networks of nonstate groups against
similar networks. Billionaires fund information to reinforce the beliefs
of particular silos, and they help launch information wars—sometimes
aimed at persuasion, often morphing into vicious cyberbullying—on
others.
The wars also focus on debates over facts. Conservative groups
skeptical of global warming funnel a constant stream of often mis-
leading information into receptive bubbles, as do some progressive
groups that are anti-vaccine or anti–genetically modified organisms
(GMOs). Wealthy individuals or political action committees can
simply decide that they want to believe something and, if that claim
roughly agrees with preexisting ideological convictions, they can gen-
erate information—usually a mix of real and fabricated—to foment
such beliefs.
Because of the fragmentation of the infosphere, these campaigns
have limits and can never achieve majority support. But the nature of
a fragmented society is such that they do not have to. A major lesson
of the last two decades is that, in a highly polarized situation in which the two political parties are gridlocked, mobilizing even 30 percent of the population against a major policy idea is enough to kill it. This is especially true if most of that 30 percent lies within the base constituencies of the two parties and thus strikes special fear into the hearts of elected officials worried about primary challenges. In the past,
when such minorities opposed policy ideas—civil rights reforms, for
example—a larger bipartisan majority was available to override the
political obstruction. That is no longer the case.
And this result has now been locked into place by the reality
of information bubbles. There does not seem to be any prospect of
re-creating a broad bipartisan majority on any issue, because no broad
collection of Americans can be assembled for anything. Information
professionals, whether corporate marketers, campaign managers, or
issue entrepreneurs, have in fact given up trying. They are now entirely
in the segmentation and specialization business. The problem for
democracy is that this business works very well to persuade small num-
bers of people of something that fits into their preconceptions, but it is
self-cancelling when it comes to building large-scale social consensus.
An accompanying trend is the fragmentation of expert commu-
nities, or epistemic communities, that had played some role in draw-
ing together scientific knowledge. There are well-established, compet-
ing subgroups in most expert fields that regularly attack one another
through social media, wars that are spilling over into academic journals
and other forums. In some cases, the divisions seem relatively arbitrary;
they are not all ideological, and some subgroups seem to form around
specific personalities or academic theories. Increasingly, the focus of
academic activity is to discredit competing academic silos; specific
departments are associated with one or another silo, as they previously
had been associated with a school of thought.
There are parallel silos at the global level. Some autocratic states
have had success sustaining national-level narratives with at least some
residual appeal amid the fragmentation—far more so than democratic
societies. Major autocracies are making use of these trends by trying
to create and then wall off massive echo chambers that equate to their
national or ethnic populations, including their diasporas. One way
they are exporting their narratives is by forcing conditions on foreign
companies and organizations for the right to engage with them. Even
by 2018, for example, China had forced over 500 academic journals to
blot out a handful of selected words—including “Tiananmen,” “Dalai
Lama,” and “Tibet”—from their articles.3 (The online versions of the
3 China’s censorship efforts in this area are recounted in Evan Osnos, “Making China
Great Again,” New Yorker, January 8, 2018.
One trend closely associated with the growth of echo chambers has been
the explosion of the trolling ethic: widespread cyberharassment that
characterizes the “wars of the silos.” Despite growing efforts by several
social media platforms, websites, and governments, the infosphere has
become, in this reality, a notably crueler and more intimidating place.4
Cyberharassment and bullying have their roots partly in the mas-
sive trolling community that emerged a decade ago, on sites where
angry, ironic bands of self-styled mischief-makers gathered to launch
massive campaigns against any target that sparked their ire. One of
the most infamous campaigns was Gamergate, in which women in the
gaming community were subject to vicious, misogynistic, and some-
times brutally threatening online attacks.5 These communities of cyber
storm troops have both fragmented and metastasized, and the web is
increasingly a place of anarchic wars of all-against-all in which waves of
cyberattack are met with equally comprehensive ripostes.
A decade ago, conducting such an attack was a laborious act of cybercraftsmanship: Each step had to be hand-coded. Today, such attacks are driven by AI engines custom-built to destroy people’s lives using cyber means. These attack bots select from a repertoire of thousands of potential actions, algorithmically matching an initial strategy to a target’s apparent vulnerabilities. They then
track responses, such as public statements by or about the targets, evi-
dence of actions they have taken, and the changing shape of available
data about them, and escalate or tailor the ongoing campaign to what
they see. All it takes now is for an official in one of the cyberaggres-
sor countries to push a button, and a life is effectively destroyed. And
because of the sophistication in spoofing and other means of conceal-
ment, AI-conducted cyberattacks are exceptionally difficult to trace.
Many of these attacks increasingly make use of the IoT as an
avenue for harassment. An especially prominent commentator on one
side of a silo war may come home to find that the WiFi-connected ther-
mostat has been reprogrammed to overheat his or her house, or that the
smart refrigerator has ordered a hundred gallons of milk. More sinister
attacks have gone after the health of the targets, manipulating the set-
tings of their web-enabled pacemaker or insulin pump and modifying
the algorithm at their doctor’s office that processes test results.
The result is an era in which the price for speaking up online—
and, in especially virulent silo wars, the price of merely viewing certain
content—can be very high. The danger of such information aggression
has had a widespread chilling effect on public dialogue.
As an ultimate response, small groups of people who had been
targeted began to join the “Off the Net” communities. Known col-
lectively as “Off the Netters,” these people had decided to back out
of the public online world. While they made extensive use of cutting-
edge technologies, such as renewable power and robotic medicine, they
cut themselves off entirely from networked information systems: social
media, the big five technology companies, smart homes, and the inter-
net itself.6 They established local banks, hospitals, and grocery stores (an innovation not seen since the Amazon Food Warehouse revolution of the early 2020s), all of which were entirely off-line. No
data is collected on anyone in digital form except medical records,
essential for effective diagnostics, and those are housed in air-gapped
servers protected by high-level security. Several of these communities
(which have now grown to contain more than 8 million residents in the
United States and an equal number in Europe) began reaching out to
6 One irony of this development is that Off the Net communities at first had to reassemble
construction firms that knew how to build a “dumb” home, without a high-speed nervous
system of information pipes to connect and run every electronic system. Smart homes had
become so ubiquitous that most construction companies had forgotten how to build any-
thing else.
7 David E. Sanger, David D. Kirkpatrick, and Nicole Perlroth, “The World Once Laughed
at North Korean Cyberpower. No More,” New York Times, October 15, 2017.
nies generally refuse any project that they believe will offend any of
the world’s major cyberpowers. China has employed trolls and bots
to crush any discussion online of unwelcome topics, including Tibet.
These attacks have included direct harassment as well as the equivalent
of denial-of-service attacks used to flood certain discussions.8
In a world of blurring boundaries between public and private
actors, national governments do not have to undertake such campaigns
of harassment directly. In many cases they can merely turn loose script
kiddies: people willing to act as online proxies for the aggressive social
manipulator. The larger echo chambers online have associated “mili-
tias” whose job it is to police countervailing opinions and launch coun-
terattacks in response to any aggression against the silo of belief. Such
retaliatory attacks have sometimes been empowered with innovative
funding techniques. Hostile actors have crowdfunded the efforts, with Russian state agencies, for example, soliciting contributions from “patriotic” Russians (and others around the world anxious to degrade U.S. power) to pay coders to attack U.S. targets.9
Another leading trend in these aggressive practices has been their
growing precision through efforts to target specific foreign individuals
marked as threats to the regimes. Many of these efforts take the form
of classic cyberbullying, online harassment, and identity theft, includ-
ing compromising targets’ personal information, taking out loans or
credit cards in their name, and sending threatening messages to home
and work email accounts. But some campaigns have gone well beyond
those prosaic approaches to many other forms of cyberharassment: cre-
ating false websites with allegedly compromising information; generat-
ing faked videos using high-grade digital mimicry programs that alleg-
edly show the targets stealing, killing, or in intimate contexts; hacking
official databases to corrupt the targets’ tax or police records; sending
critical emails to dozens of friends and colleagues; and hacking targets’
10 Many of these examples are drawn from the case described in Brooke Jarvis, “Me Living
Was How I Was Going to Beat Him,” Wired, December 2017. It cites one statistic that by
2016 over 10 million Americans reported that they had been threatened with, or had experi-
enced, the unauthorized sharing of explicit images online.
11 Danah Boyd, “Your Data Is Being Manipulated,” Data and Society, Points, October 4,
2017.
CHAPTER SEVEN
Future 3: The Rise of the Algorithms (2026 Scenario)

People’s choices of movies, their web searches, their inputs to online dic-
tionaries, and their tweets and social media comments have generated a
fantastic amount of data. By 2016, Acxiom, Cambridge Analytica, and
similar firms had collected between 1,000 and 5,000 individual pieces
of information about every American.3 With the advent of the IoT,
that stream exploded and became increasingly interlinked. Billions of
devices are now connected to shared data systems, and they are getting
progressively smaller, with many smart sensors now smaller than the
human eye can see. Some estimates suggest that the 500 billion IoT
devices in place already account for nearly $3 trillion in world gross
domestic product (GDP).4
Ubiquitous sensing and data collection now gathers information
on a million distinct subjects, including how quickly people drink their
milk, as sensed by their smart refrigerator; the quality of their per-
sonal waste, as assessed by their smart toilet; what they say as children,
as recorded and archived by their smart toys;5 how long they linger
on stories about female U.S. novelists as opposed to male British film
stars, as reported by their news subscriptions on their iPad; the precise
measurements of their body as well as a hundred data points (many
personality-related) that help virtual fashion assistants choose the right
wardrobe for them;6 and the ideas they express in their social media
posts, as tracked, collated, and analyzed by AI-driven bots. Every day,
terabytes of such data join the troves of personal information available
on public and easily hackable databases, including the finest details of
their medical, psychological, and educational histories, all now stored
together to allow machine-learning analytics targeted at well-being,
with their daily mood and location (and persistent location history)
tracked by their new FitBit Emote or other mobile fitness-tracking
device. By 2021, almost 250 million such wearable devices were being
sold every year.7
The Cloud (the term people are now using as a shorthand descrip-
tion of the mass of data hanging over their lives and crowding into
every choice and opportunity) knows whether your children are on
attention deficit/hyperactivity disorder (ADHD) medication, what
their latest grades were, and the relationship between the two (and
the relationship of each to a thousand other variables). It knows what their teachers think about them far better than their parents do. It knows who your friends are, knows about their habits
and preferences, and knows how to use those data to predict your
behavior, to a fine statistical probability. It knows the language you
use when writing—favorite words and phrases, common grammati-
cal errors, etc.—and what this says about your personality and prefer-
ences. It knows where you have driven your car, as gathered by GPS-
enabled sensors (now standard in essentially every new vehicle sold in
the United States) and communicated through the auto company, and,
as a result, can make strong inferences about your behavior.8 (Regular
trips to the liquor store tell the system one thing; a sudden spate of
stops at an urgent care center would tell it something else.) Through
persistent surveillance and facial recognition, it knows what mood you
are in and, in some cases, can approximate what you are thinking.
Increasingly, in fact, every item in society, whether financial,
social, or political, is less important for what it is than for the data it
gathers and transmits. Automobiles now have 200 to 300 times more
lines of code than the original space shuttle.9 A child’s toy has the
processing power of early supercomputers. Dolls, iPads, exercise equip-
ment, diabetic sensors, and much more are most valuable for what
they tell The Cloud—and those who seek to profit from it—about
their users. Companies are selling the physical items as loss leaders
7 Patrick Tucker, “Strava’s Just the Start: The US Military’s Losing War Against Data
Leakage,” Defense One, January 31, 2018.
8 Peter Holley, “Big Brother on Wheels: Why Your Car Company May Know More About
You Than Your Spouse,” Washington Post, January 15, 2018.
9 The number in 2018 was 200 times greater (Holley, 2018).
10 On the growth of neural networks capable of data analysis, see Cade Metz, “Finally,
Neural Networks That Actually Work,” Wired, April 21, 2015.
11 China has a parallel set of firms—Alibaba, Baidu, Tencent, and others—but they have
become largely walled off from the outside internet. Their reach beyond China is confined
to secondary networks that are air-gapped from the core mainland networks and serve the
ethnic Chinese diaspora throughout Asia.
Interactive Data
12 Pariser, 2011, p. 8.
and then take actions to advance preprogrammed goals. With the IoT
and other networked aspects of a broadly unified cloud, the messages,
ads, tracing functions, and other contact points of this process follow
users from device to device, site to site, and place to place. The Cloud
observes and responds to actions without anyone being involved. In
many important respects, it is a driverless network. “The algorithms
that orchestrate our ads are starting to orchestrate our lives,” Pariser
wrote in 2011.13 Today, in 2026, the effect is ubiquitous.
And The Cloud increasingly operates in real time. Long gone
are the days when someone’s digital exhaust was laboriously gathered
into databases that could be weeks or even months old. Now, people
interact with a dozen instantaneous information-gathering sensors:
watches, implants, built-in cameras and microphones, social media
platforms, and more.14 Radio-frequency identification (RFID) devices
have been implanted in just about all meaningful items you purchase,
allowing them to be tracked when they pass within range of any RFID
sensor—of which there are now billions spread throughout the coun-
try. Even your possessions are generating digital exhaust.15 The result is
a constantly updated model of behavior and preferences that generates messages (such as ads) and is then refined based on the reactions to those messages.
As a result, interactive platforms and systems have had to become far more responsive and nimble. By 2019, basic personalized ad sys-
tems had arrived in stores, consisting of video banners and speakers
that would offer specific products and discounts to specific individuals
as they walked by, sometimes broadcast on their AR headsets (or gener-
ated as AR cartoons in the images captured by their smartphones). At
first, though, they were single, inflexible messages. Within 18 months,
that gave way to an agile, responsive engagement: The system would
throw out an ad, gauge the emotional and biophysical response, see if
13 Pariser, 2011, p. 9.
14 By 2021, a third of Americans had accepted tiny implants in their forearms designed
to convey health data to medical professionals, but which also offered marketers
second-by-second readings of emotional reactions to advertisements and products.
15 Pariser, 2011, p. 198.
the person was slowing to look or think, and then adapt the message.16
It could lower the price, toss in one of a number of “nudges” (grounded
in behavioral economics insights) judged to be effective with this indi-
vidual, offer a message from a virtual avatar of a famous person, or
more. The Cloud had empowered the world to interact with people on
a constant basis in a highly personalized way.
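The adaptive engagement loop described above can be pictured as a simple select, observe, and adapt cycle. The following minimal Python sketch illustrates one plausible version using an epsilon-greedy rule over a handful of message variants; the variant names, the random stand-in for the sensed response, and the parameters are illustrative assumptions rather than a description of any deployed system.

    import random

    # Hypothetical message variants an in-store engine might rotate through.
    VARIANTS = {
        "base_offer": {"discount": 0.00, "nudge": None},
        "price_drop": {"discount": 0.10, "nudge": None},
        "social_proof": {"discount": 0.00, "nudge": "Shoppers like you chose this."},
        "celebrity_avatar": {"discount": 0.05, "nudge": "Pitched by a virtual spokesperson."},
    }

    def sense_response(variant_name):
        """Stand-in for sensed engagement (dwell time, expression, biometrics).
        Returns a score in [0, 1]; random here purely for illustration."""
        return random.random()

    def adaptive_engagement(rounds=20, epsilon=0.2):
        """Epsilon-greedy loop: usually replay the best-scoring variant so far,
        occasionally explore an alternative, and keep running averages."""
        history = {name: [] for name in VARIANTS}
        for _ in range(rounds):
            tried = {name: scores for name, scores in history.items() if scores}
            if not tried or random.random() < epsilon:
                choice = random.choice(list(VARIANTS))  # explore a variant
            else:
                choice = max(tried, key=lambda n: sum(tried[n]) / len(tried[n]))  # exploit
            history[choice].append(sense_response(choice))
        return {name: round(sum(s) / len(s), 2) for name, s in history.items() if s}

    print(adaptive_engagement())

In the scenario, the response score would come from emotion-sensing cameras and biometric feeds rather than a random draw, and the adaptation space would include prices, behavioral nudges, and synthetic spokespeople.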
Part of the problem, though—the significance of which few
anticipated early on—is that even the designers of the algorithms often
do not quite know why they spit out the results they do. This was clear
enough at the beginning: Google coders, authors of some of the most
sophisticated search algorithms on the planet, could build the equa-
tion and watch the data come in, but, at a certain point, there were so
many variables involved that they could no longer follow the causal
links to the outputs. They simply could not explain why their algo-
rithmic machines generated the results they did. This mystery was less
important when, for example, they could not explain the precise results
of an internet search. In 2026, with algorithms generating conclusive
social choices on everything from health care decisions to mortgage
approvals and criminal sentences, people are starting to object to the
standard-issue answer that “the numbers don’t lie.” Nobody knows,
frankly, whether they are lying or not. All anyone knows are the high-
level associational patterns that seem to prove the algorithms are work-
ing. But no one can know for sure whether any specific case—such as
an output that recommends heart surgery instead of medication or a
ten-year sentence instead of five—is an outlier.
One surprising source of data has come from the explosion of
chatbots over the last decade. One of the first to gain widespread use
and reaction was Microsoft’s Tay, which served as a powerful warning
of the risks of interactive machine learning. Trolls decided to corrupt
the system and flooded it with comments in the voice of Nazi sympa-
thizers, and Tay, “learning” from its interactions, began to repeat back
those comments to many unsuspecting users. Subsequent efforts have
become much more reliable and realistic: A Chinese version (Xiaoice,
also from Microsoft) quickly followed, and by 2016 was producing
17 See Hannah Devlin, “Human-Robot Interactions Take Step Forward with ‘Emotional’
Chatbot,” The Guardian, May 5, 2017; Liz Tracy, “In Contrast to Tay, Microsoft’s Chinese
Chatbot, Xiaoice, Is Actually Pleasant,” Inverse, March 26, 2016; and Taylor Soper, “Why
People in China Love Microsoft’s Xiaoice Virtual Companion, and What It Says About Arti-
ficial Intelligence,” GeekWire, November 25, 2015.
What users often do not realize, even though the dense user agree-
ments make it clear, is that everything they say or do when engaging a
chatbot is being recorded, processed, and evaluated for use by other ele-
ments of the IoT and The Cloud. Their opinions, thoughts, reactions
to ideas raised by the chatbot, offhand comments, and even what they
might be doing (doodling, knitting, multitasking with a phone, etc.)
while talking to the bot are recorded. Selling “engagement time” on
chatbots is now a huge market. A company might buy 30 seconds on a cooking-education bot you use, direct it to suggest a particular product to you, sense your reaction, modify and iterate, and then track your later purchases to see whether you buy it.
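A minimal sketch of how such an “engagement time” purchase might be wired together appears below; the class names, fields, and attribution step are hypothetical illustrations of the data flow described here, not any actual platform’s interface.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SponsoredSlot:
        """A purchased window of chatbot time tied to a product suggestion."""
        advertiser: str
        product: str
        seconds: int

    @dataclass
    class EngagementRecord:
        """What the bot logs about the user while the slot runs."""
        user_replies: List[str] = field(default_factory=list)
        sensed_reactions: List[str] = field(default_factory=list)  # e.g., "interested"
        later_purchase: bool = False  # filled in later by purchase tracking

    def run_sponsored_slot(slot, user_reply, sensed_reaction):
        """Inject the paid suggestion, capture the reply and the sensed reaction,
        and return the record for downstream attribution and iteration."""
        record = EngagementRecord()
        print(f"Bot: While we cook, you might try {slot.product}.")
        record.user_replies.append(user_reply)
        record.sensed_reactions.append(sensed_reaction)
        return record

    # Example: a 30-second slot purchased on a cooking-education bot.
    record = run_sponsored_slot(
        SponsoredSlot("FoodCo", "a smoked paprika blend", 30),
        user_reply="Hmm, maybe.",
        sensed_reaction="mildly interested",
    )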
In some locations, The Cloud has come to include an ongoing
record of recent events, which some have taken to calling a “digital
past.” A combination of closed-circuit television (CCTV) cameras, private security cameras whose owners have agreed to link them into The Cloud, and a web of constantly circling drones maintains an ongoing video portrait of a given city, generating a form of persistent surveillance.
When a crime happens, police can go back to the digital record for
that moment and then work backward, discovering the route of the
criminals before the crime, or forward, tracing their movements.18 Law
enforcement departments across the country, partly funded by gen-
erous donations from law-and-order–focused wealthy philanthropists,
are building a shared database of photos of everyone they arrest, which
can be used in concert with pervasive facial recognition (PFR) to locate
suspects.19
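As a rough illustration of the backward- and forward-tracing queries such a “digital past” enables, the sketch below assumes a time-indexed store of sightings (timestamp, camera, location); the data shapes and the function are hypothetical.

    from datetime import datetime, timedelta

    # Hypothetical sightings drawn from the persistent video record.
    sightings = [
        (datetime(2026, 5, 1, 14, 2), "cam_17", "5th and Main"),
        (datetime(2026, 5, 1, 14, 10), "cam_03", "Market St parking lot"),
        (datetime(2026, 5, 1, 14, 25), "cam_41", "Riverside Dr"),
    ]

    def trace(records, moment, direction="backward", window=timedelta(minutes=30)):
        """Return sightings within a window before (backward) or after (forward)
        a moment of interest, ordered along the direction of the trace."""
        if direction == "backward":
            hits = [r for r in records if moment - window <= r[0] <= moment]
            return sorted(hits, key=lambda r: r[0], reverse=True)
        hits = [r for r in records if moment <= r[0] <= moment + window]
        return sorted(hits, key=lambda r: r[0])

    incident = datetime(2026, 5, 1, 14, 12)
    print(trace(sightings, incident, "backward"))  # route before the incident
    print(trace(sightings, incident, "forward"))   # movements afterward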
That backward-looking capability has been linked to real-time
surveillance in the form of PFR systems. In most urban areas today,
people who do not intentionally evade detection will be constantly
scanned by high-resolution facial recognition technologies capable of
18 Such a system has already been used to track insurgents in Iraq and has been deployed on
a trial basis in Dayton, Ohio. See “Eye in the Sky,” Radiolab, June 18, 2015.
19 Pariser, 2011, pp. 194–195.
20 For a description of the technologies being developed as of 2018 by the Chinese firm
SenseTime, see Osnos, 2018. The jaywalking and toilet paper examples that follow are
drawn from this account.
21 Zheping Huang, “Chinese Police Are Wearing Sunglasses That Can Recognize Faces,”
Defense One, February 9, 2018.
22 Wu et al., 2012.
23 Suggested in Yuval Noah Harari, “Big Data, Google, and the End of Free Will,” Finan-
cial Times, August 26, 2016.
How about I order your favorite Chinese dish? I can have it here
in 23 minutes, and there’s still a Duvel beer left in the back of the
fridge. The money you set aside for food this week still has $50
left, plenty for the order. Or I can suggest eight approved recipes
you could put together with the food you have in the house, and
we can walk through them together while I play some nice jazz
in the kitchen.
25 On the general trend and its risks, see Danah Boyd, “Beyond the Rhetoric of Algorithmic
Solutionism,” Data and Society: Points, January 11, 2018.
26 This concept emerged in part from the writings on behavioral economics and the analysis
of how best to “nudge” people to make “more-accurate” decisions. Scholars writing in this
field cataloged human “irrationalities”—consciously choosing to earn less interest than they
might, for example—and sought to “correct” these anomalies with hints or implicit influ-
ence. A prominent example was in altering the default options on certain elective choices—
saving for retirement, for example. This principle of determining objectively more efficient
outcomes and presuming human choice to match them has now become generalized and
superempowered by The Cloud.
27 Scott Magids, Alan Zorfas, and Daniel Leemon, “The New Science of Customer Emo-
tions,” Harvard Business Review, November 2015.
run by Amazon. Big data showed that most people’s weekly food pur-
chases were 73 percent standard, so why waste the time of driving to a
19th-century grocery store and going through the same annoying pro-
cess every week? Smart refrigerators and preference algorithms, which
generate experimental purchases on the same “if you liked that, you
will love this” principle of Amazon’s website, make people’s choices
for them. Consumers can always override the system, but few do, and
surveys suggest that 90 percent of customers are satisfied with the deci-
sions The Cloud makes for them.
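The predictive-ordering logic described here can be sketched in a few lines: fill most of the basket with habitual purchases and add one exploratory item. The threshold below echoes the 73 percent figure from the scenario, but the function, data shapes, and recommendation rule are hypothetical.

    from collections import Counter
    import random

    def predicted_weekly_order(history, catalog, standard_share=0.73):
        """Build a basket of habitual items (bought in at least standard_share
        of past weeks) plus one 'if you liked that, you will love this' experiment."""
        weeks = len(history)
        counts = Counter(item for week in history for item in set(week))
        standard = [item for item, c in counts.items() if c / weeks >= standard_share]
        untried = [item for item in catalog if item not in counts]
        experiment = [random.choice(untried)] if untried else []
        return standard + experiment

    history = [["milk", "eggs", "coffee"], ["milk", "coffee", "pasta"], ["milk", "eggs", "coffee"]]
    catalog = ["milk", "eggs", "coffee", "pasta", "oat milk", "green tea"]
    print(predicted_weekly_order(history, catalog))  # the consumer can still override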
Around 2016, the scholar Yuval Noah Harari began calling this
mindset “Dataism.” “Given enough biometric data and computing
power,” he explained, “this all-encompassing system could understand
humans much better than we understand ourselves. Once that hap-
pens, humans will lose their authority, and humanist practices such as
democratic elections will become as obsolete as rain dances and flint
knives.”28 This loss of authority has now essentially occurred, with a
combination of sensors capable of gathering millions of discrete bio-
chemical, neurological, behavioral, and attitudinal data points on a
second-by-second basis and translating them into algorithmically based
preferences. The Dataists can rightly argue that on most choices, the
system does, as Harari worried, “understand my feelings much better
than I can,” and therefore makes objectively more-accurate decisions.29
As long as a decade ago, for example, it was established that algorithms
could make better judgments about people’s personalities, based on
their digital exhaust, than humans could.30
The POPE has been extended to other categories of purchases,
such as clothing, cars, and even houses. The Cloud often knows what
you want better than you do, in that it is a more objective evaluator of
preferences than your own bias-fueled decision engine. The principle
was long established in online dating sites, which have now become
28 Harari, 2016.
29 Harari, 2016.
30 Wu Youyou, Michael Kosinski, and David Stilwell, “Computer-Based Personality Judg-
ments Are More Accurate Than Those Made by Humans,” Proceedings of the National Acad-
emy of Sciences, Vol. 112, No. 4, 2015.
and warned, though society has not yet agreed to any actual punish-
ments for precrime intentions.32
Innovative scholars and programmers are now beginning to toy
with extending the POPE into politics. For the most part, the political institutions of the developed world have continued to operate much as they did in the pre-Cloud days. (People increasingly refer to primi-
tive, time-consuming, and inefficient deliberative judgment on proba-
bilistic issues as “BTC [Before The Cloud] Junk.”) Politicians employ
all manner of sophisticated AI-driven advertising techniques, but the
essential structures and processes of legislatures and executives have
remained unchanged. The biggest difference has been in what decisions
are being made: One effect of an intelligent, data- and AI-driven cloud
has been to narrow dramatically the space for politics. The Cloud has
rendered a hundred social issues as technical probability challenges,
including education, law enforcement, poverty reduction, and energy
security. In so many areas, the POPE has replaced conscious, dialogue-
driven public choice as the way society applies resources and makes
judgments. And because the results are good—and measurable—
people are generally fine with this outcome.
Now there are proposals to essentially trade out the remaining
openly political decisions for the POPE. The Cloud can know, to a
high degree of probability, what people’s choices, behaviors, thoughts,
expressed ideas, and implied beliefs suggest they will want in a social
system. And it can build algorithms to create the optimal satisfac-
tion of the highest number of such preferences. It has the potential
to become, in effect, an AI-driven automated version of the rational-
ization of interests that the Founding Fathers believed would happen
through clash and compromise. All of that rationalization can happen
inside an equation, without the costs, distractions, tensions, and some-
times outright conflict of an open political process. It presents the same
choice, in the end, as when buying groceries: If The Cloud knows what
32 In less-open countries, such as Russia and China, the situation is very different. The gov-
ernments are rumored to have set their threshold for warnings at a 60 percent likelihood of
committing crimes, and they arrest and imprison anyone with a likelihood over 90 percent.
Those thresholds are for traditional crimes; for political disloyalty, the thresholds are much
lower.
people will end up preferring, why not empower it to take the actions
necessary to fulfill that preference?
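The rationalization inside an equation imagined here amounts to a preference-aggregation problem. A minimal, hypothetical sketch: given predicted satisfaction scores for each citizen over a set of policy options, select the option that maximizes total predicted satisfaction. The options and scores are invented for illustration, and real proposals would differ in how they weight and aggregate preferences.

    def optimal_policy(predicted_satisfaction):
        """predicted_satisfaction maps each policy option to a list of predicted
        satisfaction scores (0 to 1), one per citizen. Returns the option with
        the highest total, a simple utilitarian aggregation."""
        totals = {option: sum(scores) for option, scores in predicted_satisfaction.items()}
        return max(totals, key=totals.get), totals

    predictions = {
        "option_a": [0.9, 0.2, 0.6],   # one predicted score per citizen
        "option_b": [0.5, 0.7, 0.55],
    }
    print(optimal_policy(predictions))  # option_b wins on total predicted satisfaction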
There have been glitches and there are risks, to be sure. Any large data
network is hackable, and profit-driven hackers have used every possible
angle to siphon resources off The Cloud. They steal FaceCoins and
AlphaCash; reroute driverless electric trucks full of groceries to black
market distribution points; and grab personal data and use it, as they
have for decades, to fuel identity theft. An especially significant trend
has been the use of supercharged ransomware attacks to lock down
major pieces of the interlinked Cloud (typically things tied into the
IoT), forcing either individual users or major corporations to pay mas-
sive amounts to unlock the information.
By and large, though, the system has proved more resilient than
many feared. This resilience has been a product of two things. First,
the very density and interconnectedness of the network turn out to
make it less vulnerable. There are very few single points of failure and
many backup systems and capabilities. Second, the most powerful AI
advances are proprietary to the big five firms, and they are deployed
to protect the stability of The Cloud. They anticipate, sense, and hunt
down various efforts to undermine or steal from it. The result is noth-
ing like perfect but has been largely good enough.
Another risk, and a growing source of popular resentment, has been the sense that The Cloud increasingly imposes a series of constraints on individual freedom. Citizens of advanced democracies have begun to
see aspects of this algorithmic reality as sinister, slightly muted versions
of China’s infamous “social credit score”: a cruelly simplified number
that reflects a person’s reliability in economic and political terms.33
Even in the United States, access to key social goods, including loans
and jobs, as well as the best schools, doctors, and hospitals, is deter-
33 For a description of how the social credit score is working, see Mara Hvistendahl, “Inside
China’s Vast New Experiment in Social Ranking,” Wired, December 14, 2017.
34 Hackers who specialize in such attacks have become known as “tuners.” There are sites
on the dark web specifically devoted to this practice, where tuners share techniques, vulner-
abilities of algorithmic systems, and success stories.
1 Braden R. Allenby, “The Age of Weaponized Narrative, or, Where Have You Gone,
Walter Cronkite?” Issues in Science and Technology, Vol. 33, No. 4, Summer 2017, p. 68.
systems the size of insects (or eventually even smaller) in direct, drone-
based attacks highly targeted to individuals or locations.4
4 Evans, 2015.
5 Tim Hwang, Maneuver and Manipulation: On the Military Strategy of Online Information
Warfare, Carlisle Barracks, Pa.: Strategic Studies Institute, 2019.
6 Henry Farrell, “American Democracy Is an Easy Target,” Foreign Policy, January 17,
2018.
7 Benjamin Wittes and Gabriella Blum, The Future of Violence: Robots and Germs, Hackers
and Drones, New York: Basic Books, 2015, pp. 79–80.
erected a strong barrier between the public and the private, and could
offer advantages to governments that blend such elements of society,
such as China.
Conflict will increasingly be waged between and among networks.
Wittes and Blum have described one implication of trends related to
virtual societal warfare as a “many to many” pattern of security interac-
tions. Every nation, company, group, and individual can threaten any
other, they write. “We are thus in a moment unlike any other in the
history of the world, one in which distance does not protect you and in
which you are at once a figure of great power and great vulnerability.”8
The result is war of networks against networks. “States will increas-
ingly fight future wars using a multitude of partner organizations.”9
This pattern is emerging, for example, in the complex, international
network of hackers, activists, and informal propagandists being
employed by Russia as part of its information campaigns and in Chi-
na’s use of Chinese citizens and ethnic Chinese abroad to further its
control over key narratives. State actors are likely to develop such networks to avoid attribution and to insulate their virtual societal warfare capabilities against retaliation: It will be much more difficult
to understand, maintain an accurate portrait of, and hit back against a
shadowy global network.
Already today, another implication of this networked model of
conflict is becoming apparent: the rise of what has been called a “pri-
vateering” approach to key security functions. “Privateers funded their
own operations and made money by keeping the ‘prizes’ they seized,”
Wittes and Blum explain. “Precisely because of this decentralization,
privateering became a tool for mustering private capital, private energy,
and private risk in the service of public military objectives.” This model
is emerging not only in the use of private information security firms to
sustain the health of a nation’s information networks, but in the model
of hackers using ransomware and other techniques to profit from their
As such long-term discussions are under way, there are several steps
that the United States and other democracies could take to shore up
their resilience against these threats. A major challenge is determining
how to organize the response to such threats within the U.S. govern-
ment. There is no obvious home for “infosphere security.” The ques-
tion of institutional structures is beyond the scope of this analysis but
must be addressed alongside the substantive reforms that can help mit-
igate these risks. Some of these reforms are outlined in the following
sections.
16 Nina Jankowicz, “The Only Way to Defend Against Russia’s Information War,” New
York Times, September 25, 2017.
17 For information on this program, see IREX, Learn to Discern (L2D)—Media Literacy
Training, undated.
18 Helmus and Bodine-Baron, 2017, pp. 2–3.
Take Seriously the Leading Role Played by Social Media Today, and
the Precedent-Setting Character of Many of the Information Control
Debates Playing Out in That Realm
Governments should increasingly look to actions that can incentivize
social media platforms to solve the problems themselves to the great-
est degree possible. In the process, governments should identify four to
five things that the platforms can do over the next two to three years
to make a dent in the problem.
Many social media companies are beginning to move in this
direction, though perhaps more slowly than the risks warrant. Face-
book is adding a button on news feed links that will connect to the
Wikipedia site for the organization that published or posted that news
story. The idea is to give users an ability to quickly get a sense of the
credibility of the source.19 Facebook has also enlisted a set of outside fact-checking organizations to identify articles that might be misinformation. It considered revisions to the algorithm that would have
systematically downgraded articles hitting certain tripwires for sensing
fabricated information—but then backed off when tests showed that
it would disproportionately hit conservative sites. It then experimented
with both user-driven and algorithmic responses (using keywords and
phrases to identify them), eventually settling on the latter because it
19 Josh Constine, “Facebook Tries Fighting Fake News with Publisher Info Button on
Links,” TechCrunch, October 5, 2017.
20 Josh Constine, “Facebook Chose to Fight Fake News with AI, Not Just User Reports,”
TechCrunch, November 14, 2016.
References
Adamic, Lada, and Natalie Glance, “The Political Blogosphere and the 2004 U.S.
Election: Divided They Blog,” paper presented at the Second Annual Workshop
on the Weblogging Ecosystem, Chiba, Japan: Association for Computing
Machinery, 2005.
Adler, Simon, “Breaking News,” Radiolab, July 27, 2017. As of January 23, 2019:
http://www.radiolab.org/story/breaking-news/
Adobe Inc., “#VoCo. Adobe MAX 2016 (Sneak Peeks),” November 4, 2016. As of
January 23, 2019:
https://www.youtube.com/watch?v=I3l4XLZ59iw
Aharanov, Alex, “What Is the Future of Augmented and Virtual Reality?” Jabil
Blog, March 27, 2018. As of March 5, 2019:
https://www.jabil.com/insights/blog-main/future-of-augmented-and-virtual-
reality-technology.html
Ahluwalia, Rohini, “Examination of Psychological Processes Underlying
Resistance to Persuasion,” Journal of Consumer Research, Vol. 27, No. 2, 2000,
pp. 217–232.
Allen, Greg, and Taniel Chan, Artificial Intelligence and National Security,
Cambridge, Mass.: Belfer Center for Science and International Affairs, Harvard
Kennedy School, July 2017.
Allenby, Braden R., “The Age of Weaponized Narrative, or, Where Have
You Gone, Walter Cronkite?” Issues in Science and Technology, Vol. 33, No. 4,
Summer 2017.
Allport, Gordon, “Attitudes,” in Carl Murchison, ed., A Handbook of Social
Psychology, Worcester, Mass.: Clark University Press, 1935.
Anand, Bharat N., “The U.S. Media’s Problems Are Much Bigger than Fake News
and Filter Bubbles,” Harvard Business Review, January 5, 2017. As of January 20,
2019:
https://hbr.org/2017/01/
the-u-s-medias-problems-are-much-bigger-than-fake-news-and-filter-bubbles
Andrasik, Andrew J., Hacking Humans: The Evolving Paradigm with Virtual
Reality, SANS Institute, Information Security Reading Room, November 2017. As
of January 22, 2019:
https://www.sans.org/reading-room/whitepapers/testing/
hacking-humans-evolving-paradigm-virtual-reality-38180
Arquilla, John, and David Ronfeldt, The Emergence of Noopolitik: Toward an
American Information Strategy, Santa Monica, Calif.: RAND Corporation,
MR-1033-OSD, 1999. As of March 1, 2019:
https://www.rand.org/pubs/monograph_reports/MR1033.html
Arsene, Codrin, “IoT Ideas That Will Soon Revolutionize Our World in 8 Ways,”
Y Media Labs, November 24, 2016. As of January 22, 2019:
https://ymedialabs.com/internet-of-things-ideas/
Auletta, Ken, “How the Math Men Overthrew the Mad Men,” New Yorker,
May 21, 2018. As of January 20, 2019:
https://www.newyorker.com/news/annals-of-communications/
how-the-math-men-overthrew-the-mad-men
Bakshy, Eytan, Jake M. Hofman, Winter A. Mason, and Duncan J. Watts,
“Everyone’s an Influencer: Quantifying Influence on Twitter,” Fourth ACM
International Conference on Web Search and Data Mining, Conference Proceedings,
Hong Kong: Association for Computing Machinery, 2011, pp. 65–74.
Bakshy, Eytan, Solomon Messing, and Lada Adamic, “Exposure to Ideologically
Diverse News and Opinion on Facebook,” Science, Vol. 348, No. 6239, 2015,
pp. 1130–1132.
Barabási, Albert-László, Linked: How Everything Is Connected to Everything Else
and What It Means for Business, Science, and Everyday Life, New York: Basic Books,
2014.
Bennett, W. Lance, and Shanto Iyengar, “A New Era of Minimal Effects? The
Changing Foundations of Political Communication,” Journal of Communication,
Vol. 58, No. 4, 2008, pp. 707–731.
Berger, Jonah, Contagious: Why Things Catch On, New York: Simon and Schuster,
2013.
Bialik, Kristen, and Katerina Eva Matsa, “Key Trends in Social and Digital News
Media,” Pew Research Center, October 4, 2017. As of January 20, 2019:
http://www.pewresearch.org/fact-tank/2017/10/04/
key-trends-in-social-and-digital-news-media/
Biocca, Frank, “Viewers’ Mental Models of Political Messages: Toward a Theory
of the Semantic Processing of Television,” in Frank Biocca, ed., Television and
Political Advertising, Vol. I: Psychological Processes, Hillsdale, N.J.: Lawrence
Erlbaum Associates, 1991.
Bulger, Monica, and Patrick Davison, The Promises, Challenges, and Futures of
Media Literacy, New York: Data and Society Research Institute, February 2018. As
of January 20, 2019:
https://datasociety.net/pubs/oh/DataAndSociety_Media_Literacy_2018.pdf
Cai, Haoye, Chunyan Bai, Yu-Wing Tai, and Chi-Keung Tang, “Deep Video
Generation, Prediction and Completion of Human Action Sequences,” European
Conference on Computer Vision 2018, Conference Proceedings, Munich, Germany,
September 2018. As of March 4, 2019:
https://arxiv.org/pdf/1711.08682.pdf
Campbell, Troy, and Justin Friesen, “Why People ‘Fly from Facts,’” Scientific
American, March 3, 2015. As of January 20, 2019:
https://www.scientificamerican.com/article/why-people-fly-from-facts/
Cardenal, Juan Pablo, Jacek Kucharczyk, Grigorij Mesežnikov, and Gabriela
Pleschová, Sharp Power: Rising Authoritarian Influence, Washington, D.C.:
National Endowment for Democracy, December 2017.
Carey, John M., Brendan Nyhan, Benjamin Valentino, and Mingnan Liu, “An
Inflated View of the Facts? How Preferences and Predispositions Shape Conspiracy
Beliefs About the Deflategate Scandal,” Research and Politics, Vol. 3, No. 3, July–
September 2016, pp. 1–9.
Casey, Michael J., and Paul Vigna, “In Blockchain We Trust,” MIT Technology
Review, April 9, 2018.
Castells, Manuel, The Rise of the Network Society, New York: Wiley-Blackwell,
2009.
Chan, Man-pui Sally, Christopher R. Jones, Kathleen Hall Jamieson, and
Dolores Albarracín, “Debunking: A Meta-Analysis of the Psychological Efficacy
of Messages Countering Misinformation,” Psychological Science, Vol. 28, No. 11,
2017, pp. 1531–1546.
Cheng, Justin, Lada Adamic, P. Alex Dow, Jon Michael Kleinberg, and Jure
Leskovec, “Can Cascades Be Predicted?” Twenty-Third International World
Wide Web Conference, Conference Proceedings, Seoul: Association for Computing
Machinery, 2014, pp. 925–936.
Chessen, Matt, “Understanding the Psychology Behind Computational
Propaganda,” in Shawn Powers and Markos Kounalakis, eds., Can Public
Diplomacy Survive the Internet? Washington, D.C.: U.S. Advisory Commission on
Public Diplomacy, 2017a, pp. 19–24.
———, The MADCOM Future, Washington, D.C.: Atlantic Council, 2017b.
Cialdini, Robert, Pre-Suasion: A Revolutionary Way to Influence and Persuade, New
York: Simon and Schuster, 2016.
Clarke, Richard A., Cyber War: The Next Threat to National Security and What to
Do About It, New York: Ecco, 2010.
Cobb, Michael D., Brendan Nyhan, and Jason Reifler, “Beliefs Don’t Always
Persevere: How Political Figures Are Punished When Positive Information About
Them Is Discredited,” Political Psychology, Vol. 34, No. 3, June 2013, pp. 307–326.
Constine, Josh, “Facebook Chose to Fight Fake News with AI, Not Just User
Reports,” TechCrunch, November 14, 2016.
———, “Facebook Tries Fighting Fake News with Publisher Info Button on
Links,” TechCrunch, October 5, 2017.
Cook, John, Ullrich K. H. Ecker, and Stephan Lewandowsky, “Misinformation
and How to Correct It,” in Robert Scott and Stephan Kosslyn, eds., Emerging
Trends in the Social and Behavioral Sciences, New York: John Wiley and Sons, 2015.
———, “Neutralizing Misinformation Through Inoculation: Exposing
Misleading Argumentation Techniques Reduces Their Influence,” PLOS One,
Vol. 12, No. 5, 2017, pp. 1–21.
Cummings, M. L., Heather M. Roff, Kenneth Cukier, Jacob Parakilas, and
Hannah Bryce, Artificial Intelligence and International Affairs: Disruption
Anticipated, London: Chatham House, 2018.
Cutsinger, Paul, “Mark Cuban: Voice, Ambient Computing Are the Future.
Developers Should Get in Now,” Amazon Alexa Blog, March 25, 2019. As of
April 19, 2019:
https://developer.amazon.com/blogs/alexa/post/0902e3c5-9649-47e5-b705-
984666b85125/mark-cuban-voice-ambient-computing-are-the-future-and-why-
developers-should-get-in-now
DeMers, Jayson, “7 New Technologies Shaping Online Marketing for the Better
(We Hope),” Forbes, August 15, 2016. As of January 22, 2019:
https://www.forbes.com/sites/jaysondemers/2016/08/15/7-new-technologies-
shaping-online-marketing-for-the-better-we-hope/#4d8173761fd6
Devlin, Hannah, “Human-Robot Interactions Take Step Forward with
‘Emotional’ Chatbot,” The Guardian, May 5, 2017. As of January 23, 2019:
https://www.theguardian.com/technology/2017/may/05/human-robot-interactions-
take-step-forward-with-emotional-chatting-machine-chatbot
Diaz, Jesus, “The Weird, Wild Future of CGI,” Fast Company, October 19, 2017.
As of January 23, 2019:
https://www.fastcodesign.com/90147151/what-lies-beyond-the-uncanny-valley
Druckman, James, “On the Limits of Framing Effects: Who Can Frame?” Journal
of Politics, Vol. 63, No. 4, 2001, pp. 1041–1066.
Duggan, Maeve, and Aaron Smith, “The Political Environment on Social Media,”
Pew Research Center, October 25, 2016.
Eagly, Alice H., and Shelly Chaiken, “An Attribution Analysis of the Effect of
Communicator Characteristics on Opinion Change: The Case of Communicator
Attractiveness,” Journal of Personality and Social Psychology, Vol. 32, No. 1, 1975,
pp. 136–144.
Edwards, George C., III, On Deaf Ears: The Limits of the Bully Pulpit, New Haven,
Conn.: Yale University Press, 2006.
Edwards, Kari, and Edward E. Smith, “A Disconfirmation Bias in the Evaluation
of Arguments,” Journal of Personality and Social Psychology, Vol. 71, No. 1, 1996,
pp. 5–24.
Engber, Daniel, “LOL Something Matters,” Slate, January 3, 2018.
Epley, Nicholas, and Thomas Gilovich, “The Mechanics of Motivated Reasoning,”
Journal of Economic Perspectives, Vol. 30, No. 3, 2016, pp. 133–140.
Evans, Gareth, “Robotic Insects Add Whole New Meaning to ‘Fly-on-the-Wall’
Surveillance,” Army Technology, March 16, 2015. As of January 23, 2019:
https://www.army-technology.com/features/featurerobotic-insects-add-whole-new-
meaning-to-fly-on-the-wall-surveillance-4531866/
“Eye in the Sky,” Radiolab, June 18, 2015. As of January 23, 2019:
http://www.radiolab.org/story/eye-sky/
“Fake News: You Ain’t Seen Nothing Yet,” The Economist, July 1, 2017. As of
January 22, 2019:
https://www.economist.com/news/science-and-technology/21724370-generating-
convincing-audio-and-video-fake-events-fake-news-you-aint-seen
Farrell, Henry, “American Democracy Is an Easy Target,” Foreign Policy,
January 17, 2018. As of January 23, 2019:
http://foreignpolicy.com/2018/01/17/american-democracy-was-asking-for-it/
Fazio, Lisa K., Nadia M. Brashier, B. Keith Payne, and Elizabeth J. Marsh,
“Knowledge Does Not Protect Against Illusory Truth,” Journal of Experimental
Psychology, Vol. 144, No. 5, 2015, pp. 993–1002.
Ferran, Lee, “Beware the Coming Crisis of ‘Deep Fake News,’” RealClearLife,
July 27, 2018. As of January 23, 2019:
http://www.realclearlife.com/technology/
liars-dividend-beware-the-coming-crisis-of-deep-fake-news/#1
Flaxman, Seth, Sharad Goel, and Justin M. Rao, “Filter Bubbles, Echo Chambers,
and Online News Consumption,” Public Opinion Quarterly, Vol. 80, No. S1,
2016, pp. 298–320.
Fleming, Jennifer, “Media Literacy, News Literacy, or News Appreciation? A Case
Study of the News Literacy Program at Stony Brook University,” Journalism and
Mass Communication Educator, Vol. 69, No. 2, 2014, pp. 146–165.
Fletcher, Richard, Alessio Cornia, Lucas Graves, and Rasmus Kleis Neilsen,
Measuring the Reach of “Fake News” and Online Disinformation in Europe, Oxford,
United Kingdom: Reuters Institute for the Study of Journalism, University of
Oxford, February 2018.
Flynn, D. J., Brendan Nyhan, and Jason Reifler, “The Nature and Origins of
Misperceptions: Understanding False and Unsupported Beliefs About Politics,”
Political Psychology, Vol. 38, Supp. 1, 2017, pp. 127–150.
Fukuyama, Francis, Trust: The Social Virtues and the Creation of Prosperity, New
York: Free Press, 1995.
Gal, David, and Derek D. Rucker, “When in Doubt, Shout! Paradoxical
Influences of Doubt on Proselytizing,” Psychological Science, Vol. 21, No. 11, 2010,
pp. 1701–1707.
Gallup, In Depth: Topics A to Z: Confidence in Institutions, 2018. As of January 24,
2019:
https://news.gallup.com/poll/1597/confidence-institutions.aspx
Gans, John A., Jr., “Governing Fantasyland,” Survival, Vol. 60, No. 3,
June-July 2018, pp. 195–202.
Garrett, R. Kelly, “Echo Chambers Online?: Politically Motivated Selective
Exposure Among Internet News Users,” Journal of Computer-Mediated
Communication, Vol. 14, No. 2, 2009a, pp. 265–285.
———, “Politically Motivated Reinforcement Seeking: Reframing the Selective
Exposure Debate,” Journal of Communication, Vol. 59, No. 4, 2009b, pp. 676–699.
Garrett, R. Kelly, Dustin Carnahan, and Emily K. Lynch, “A Turn Toward
Avoidance? Selective Exposure to Online Political Information, 2004–2008,”
Political Behavior, Vol. 35, No. 1, 2013, pp. 113–134.
Garrett, R. Kelly, and Paul Resnick, “Resisting Political Fragmentation on the
Internet,” Daedalus, Vol. 140, No. 4, Fall 2011, pp. 108–120.
Gentzkow, Matthew, and Jesse M. Shapiro, “Ideological Segregation Online and
Offline,” Quarterly Journal of Economics, Vol. 126, No. 4, 2011, pp. 1799–1839.
Gholipour, Bahar, “New AI Tech Can Mimic Any Voice,” Scientific American,
May 2, 2017. As of January 23, 2019:
https://www.scientificamerican.com/article/new-ai-tech-can-mimic-any-voice/
Giddens, Anthony, The Consequences of Modernity, Cambridge, United Kingdom:
Polity, 1990.
Glanville, Jennifer L., Matthew A. Andersson, and Pamela Paxton, “Do Social
Connections Create Trust? An Examination Using New Longitudinal Data,”
Social Forces, Vol. 92, No. 2, 2013, pp. 545–562.
Karras, Tero, Timo Aila, Samuli Laine, and Jaakko Lehtinen, “Progressive
Growing of GANs for Improved Quality, Stability, and Variation,” Sixth
International Conference on Learning Representations, Conference Proceedings,
Vancouver, Canada, 2018.
Kavanagh, Jennifer, and Michael D. Rich, Truth Decay: An Initial Exploration of
the Diminishing Role of Facts and Analysis in American Public Life, Santa Monica,
Calif.: RAND Corporation, RR-2314-RC, 2018. As of March 4, 2019:
https://www.rand.org/pubs/research_reports/RR2314.html
Kelman, Herbert C., “Attitudes Are Alive and Well and Gainfully Employed in
the Sphere of Action,” American Psychologist, Vol. 29, No. 5, 1974, pp. 310–324.
Kilovaty, Ido, “Doxfare: Politically Motivated Leaks and the Future of the Norm
on Non-Intervention in the Era of Weaponized Information,” Harvard National
Security Journal, Vol. 9, No. 1, 2018, pp. 146–179.
Knack, Stephen, and Philip Keefer, “Does Social Capital Have an Economic
Payoff? A Cross-Country Investigation,” Quarterly Journal of Economics, Vol. 112,
No. 4, 1997, pp. 1251–1288.
Knight, Will, “The Dark Secret at the Heart of AI,” MIT Technology Review,
April 11, 2017. As of January 22, 2019:
https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
———, “The US Military Is Funding an Effort to Catch Deepfakes and Other AI
Trickery,” MIT Technology Review, May 23, 2018a.
———, “This AI Program Could Beat You in an Argument—But It Doesn’t
Know What It’s Saying,” MIT Technology Review, June 19, 2018b. As of January
22, 2019:
https://www.technologyreview.com/s/611487/this-ai-program-could-beat-you-in-
an-argumentbut-it-doesnt-know-what-its-saying/
———, “Hordes of Research Robots Could Be Hijacked for Fun and Sabotage,”
MIT Technology Review, July 24, 2018c.
———, “The Defense Department Has Produced the First Tools for Catching
Deep Fakes,” MIT Technology Review, August 7, 2018d.
———, “These Incredibly Realistic Fake Faces Show How Algorithms Can Now
Mess with Us,” MIT Technology Review, December 14, 2018e.
Kolbert, Elizabeth, “Why Facts Don’t Change Our Minds,” New Yorker,
February 27, 2017. As of January 20, 2019:
https://www.newyorker.com/magazine/2017/02/27/
why-facts-dont-change-our-minds
Mercier, Hugo, and Dan Sperber, The Enigma of Reason, Cambridge, Mass.:
Harvard University Press, 2017.
Metz, Cade, “Finally, Neural Networks That Actually Work,” Wired, April 21,
2015.
Metz, Cade, and Keith Collins, “How an A.I. ‘Cat-and-Mouse Game’ Generates
Believable Fake Photos,” New York Times, January 2, 2018. As of January 23,
2019:
https://www.nytimes.com/interactive/2018/01/02/technology/
ai-generated-photos.html
Meyer, Jared, “The Ignorant Voter,” Forbes, June 27, 2016. As of January 24, 2019:
https://www.forbes.com/sites/jaredmeyer/2016/06/27/
american-voters-are-ignorant-but-not-stupid/#3d425e67ff17
Mitzen, Jennifer, “Ontological Security in World Politics: State Identity and the
Security Dilemma,” European Journal of International Relations, Vol. 12, No. 3,
2006, pp. 341–370.
Moore, James W., “What Is the Sense of Agency and Why Does It Matter?”
Frontiers in Psychology, Vol. 7, Article 1272, August 2016.
Müller, Jan-Werner, What Is Populism? Philadelphia, Pa.: University of
Pennsylvania Press, 2016.
Munro, Geoffrey D., and Peter H. Ditto, “Biased Assimilation, Attitude
Polarization, and Affect in Reactions to Stereotype-Relevant Scientific
Information,” Personality and Social Psychology Bulletin, Vol. 23, No. 6, 1997,
pp. 636–653.
Mutz, Diana C., and Paul S. Martin, “Facilitating Communication Across Lines
of Political Difference: The Role of Mass Media,” American Political Science
Review, Vol. 95, No. 1, 2001, pp. 97–114.
Nagle, Angela, Kill All Normies: Online Culture Wars from 4chan and Tumblr to
Trump and the Alt-Right, Winchester, United Kingdom: Zero Books, 2017.
National Public Radio, “Trust and Consequences,” TED Radio Hour, May 15,
2015. As of January 23, 2019:
https://www.npr.org/programs/ted-radio-hour/406238794/trust-and-consequences
———, “Big Data Revolution,” TED Radio Hour, September 9, 2016. As of
January 23, 2019:
https://www.npr.org/programs/ted-radio-hour/492296605/big-data-revolution
———, “How 5 Tech Giants Have Become More Like Governments Than
Companies,” Fresh Air, October 26, 2017. As of January 20, 2019:
https://www.npr.org/2017/10/26/560136311/
how-5-tech-giants-have-become-more-like-governments-than-companies
Neef, Dale, Digital Exhaust, Upper Saddle River, N.J.: Pearson FT Press, 2014.
References 185
———, “The Roles of Information Deficits and Identity Threat in the Prevalence
of Misperceptions,” Journal of Elections, Public Opinion and Parties, 2018,
pp. 1–23.
O’Neil, Cathy, Weapons of Math Destruction: How Big Data Increases Inequality
and Threatens Democracy, New York: Broadway Books, 2016.
Osnos, Evan, “Making China Great Again,” New Yorker, January 8, 2018.
Pardes, Arielle, “What My Personal Chat Bot Is Teaching Me About AI’s Future,”
Wired, November 12, 2017. As of January 22, 2019:
https://www.wired.com/story/
what-my-personal-chat-bot-replika-is-teaching-me-about-artificial-intelligence/
Pariser, Eli, The Filter Bubble: What the Internet Is Hiding from You, New York:
Penguin Press, 2011.
Pasquale, Frank, The Black Box Society: The Secret Algorithms That Control Money
and Information, Cambridge, Mass.: Harvard University Press, 2015.
Paul, Christopher, and Miriam Matthews, “The Russian ‘Firehose of Falsehood’
Propaganda Model: Why It Might Work and Options to Counter It,” Santa
Monica, Calif.: RAND Corporation, PE-198-OSD, 2016. As of March 4, 2019:
https://www.rand.org/pubs/perspectives/PE198.html
Pentland, Alex, Social Physics: How Good Ideas Spread—The Lessons from a New
Science, New York: Penguin Press, 2014.
Perez, Sarah, “Voice-Enabled Smart Speakers to Reach 55% of U.S. Households by
2022, Says Report,” TechCrunch, November 8, 2017.
Persily, Nathaniel, “Can Democracy Survive the Internet?” Journal of Democracy,
Vol. 28, No. 2, April 2017.
Petty, Richard E., Russell H. Fazio, and Pablo Briñol, eds., Attitudes: Insights from
the New Implicit Measures, New York: Psychology Press, 2008.
Pew Research Center, Public Knowledge of Current Affairs Little Changed by News
and Information Revolutions, Washington, D.C., April 15, 2007.
Phillips, Whitney, This Is Why We Can’t Have Nice Things: Mapping the
Relationship Between Online Trolling and Mainstream Culture, Cambridge, Mass.:
MIT Press, 2015.
Pierce, David, “Enjoy Your New Virtual Office,” Wired, February 2018.
Pollard, Neal A., Adam Segal, and Matthew G. Devost, “Trust War: Dangerous
Trends in Cyber Conflict,” War on the Rocks, January 16, 2018. As of January 23,
2019:
https://warontherocks.com/2018/01/trust-war-dangerous-trends-cyber-conflict/
Pratkanis, Anthony R., and Elliot Aronson, Age of Propaganda: The Everyday Use
and Abuse of Persuasion, New York: Henry Holt and Company, 2001.
References 187
Price, Rob, “AI and CGI Will Transform Information Warfare, Boost Hoaxes, and
Escalate Revenge Porn,” Business Insider, August 12, 2017. As of January 23, 2019:
http://www.businessinsider.com/
cgi-ai-fake-video-audio-news-hoaxes-information-warfare-revenge-porn-2017-8
Putnam, Robert, Bowling Alone: The Collapse and Revival of American Community,
New York: Simon and Schuster, 2001.
Rabin-Havt, Ari, Lies, Incorporated: The World of Post-Truth Politics, New York:
Anchor Books, 2016.
Radinsky, Kira, “Your Algorithms Are Not Safe from Hackers,” Harvard Business
Review, January 5, 2016. As of January 22, 2019:
https://hbr.org/2016/01/your-algorithms-are-not-safe-from-hackers
Redlawsk, David P., “Hot Cognition or Cool Consideration? Testing the Effects
of Motivated Reasoning on Political Decision Making,” Journal of Politics, Vol. 64,
No. 4, 2002, pp. 1021–1044.
Redlawsk, David P., Andrew J. W. Civettini, and Karen M. Emmerson, “The
Affective Tipping Point: Do Motivated Reasoners Ever ‘Get It’?” Political
Psychology, Vol. 31, No. 4, 2010, pp. 563–593.
Richardson, John H., “AI Chatbots Try to Schedule Meetings—Without Enraging
Us,” Wired, May 24, 2018. As of January 22, 2019:
https://www.wired.com/story/xai-meeting-ai-chatbot/
Robbins, Blaine G., “Institutional Quality and Generalized Trust: A Nonrecursive
Causal Model,” Social Indicators Research, Vol. 107, No. 2, 2012, pp. 235–258.
Roberts, Graham, “Augmented Reality: How We’ll Bring the News into Your
Home,” New York Times, February 1, 2018. As of January 24, 2019:
https://www.nytimes.com/interactive/2018/02/01/sports/olympics/
nyt-ar-augmented-reality-ul.html
Rodriguez, Ashley, “In Five Years, VR Could Be as Big in the US as Netflix,”
Quartz, June 6, 2018. As of January 22, 2019:
https://qz.com/1298512/
vr-could-be-as-big-in-the-us-as-netflix-in-five-years-study-shows/
Romano, Andrew, “How Ignorant Are Americans?” Newsweek, March 20, 2011.
Roose, Kevin, “Here Come the Fake Videos, Too,” New York Times, March 4,
2018. As of January 23, 2019:
https://www.nytimes.com/2018/03/04/technology/fake-videos-deepfakes.html
Rose, Flemming, and Jacob Mchangama, “History Proves How Dangerous It Is to
Have the Government Regulate Fake News,” Washington Post, October 3, 2017. As
of January 23, 2019:
https://www.washingtonpost.com/news/theworldpost/wp/2017/10/03/
history-proves-how-dangerous-it-is-to-have-the-government-regulate-fake-news/
188 The Emerging Risk of Virtual Societal Warfare
Ross, Alec, The Industries of the Future, New York: Simon and Schuster, 2016.
Ross, Lee D., Mark R. Lepper, Fritz Strack, and Julia Steinmetz, “Social
Explanation and Social Expectation: Effects of Real and Hypothetical
Explanations on Subjective Likelihood,” Journal of Personality and Social
Psychology, Vol. 35, No. 11, 1977, pp. 817–829.
Rothman, Joshua, “Afterimage,” New Yorker, November 12, 2018.
Rothstein, Bo, and Eric M. Uslaner, “All for All: Equality, Corruption, and Social
Trust,” World Politics, Vol. 58, No. 1, 2005, pp. 41–72.
Rubin, Peter, “You’ll Go to Work in Virtual Reality,” Wired, June 2018, p. 61.
Sabelman, Eric E., and Roger Lam, “The Real-Life Dangers of Augmented
Reality,” IEEE Spectrum, June 23, 2015. As of January 22, 2019:
https://spectrum.ieee.org/consumer-electronics/portable-devices/
the-reallife-dangers-of-augmented-reality
Sanger, David E., David D. Kirkpatrick, and Nicole Perlroth, “The World Once
Laughed at North Korean Cyberpower. No More,” New York Times, October 15,
2017. As of January 23, 2019:
https://www.nytimes.com/2017/10/15/world/asia/
north-korea-hacking-cyber-sony.html
Scharre, Paul, and Michael C. Horowitz, Artificial Intelligence: What Every
Policymaker Needs to Know, Washington, D.C.: Center for a New American
Security, June 2018.
Schiff, Stacy, “The Interactive Truth,” New York Times, June 15, 2005. As of
January 23, 2019:
https://www.nytimes.com/2005/06/15/opinion/the-interactive-truth.html
Schneier, Bruce, Click Here to Kill Everybody: Security and Survival in a Hyper-
Connected World, New York: W. W. Norton, 2018.
Segal, Adam, The Hacked World Order: How Nations Fight, Trade, Maneuver, and
Manipulate in the Digital Age, New York: PublicAffairs, 2016.
Selk, Avi, “This Audio Clip of a Robot as Trump May Prelude a Future of Fake
Human Voices,” Washington Post, May 3, 2017. As of January 22, 2019:
https://www.washingtonpost.com/news/innovations/wp/2017/05/03/
this-audio-clip-of-trump-as-a-robot-may-prelude-a-future-of-fake-human-voices/
Sen, Conor, “The ‘Big Five’ Could Destroy the Tech Ecosystem,” Bloomberg,
November 15, 2017. As of January 20, 2019:
https://www.bloomberg.com/view/articles/2017-11-15/
the-big-five-could-destroy-the-tech-ecosystem
References 189
Stafford, Tom, “Psychology: Why Bad News Dominates the Headlines,” BBC,
July 29, 2014. As of January 20, 2019:
http://www.bbc.com/future/story/20140728-why-is-all-the-news-bad
Stone, Peter, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg
Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus,
Kevin Leyton-Brown, David Parkes, William Press, AnnaLee Saxenian, Julie
Shah, Milind Tambe, and Astro Teller, Artificial Intelligence and Life in 2030: One
Hundred Year Study on Artificial Intelligence, Stanford, Calif.: Stanford University,
September 2016.
Strange, Adario, “Face-Tracking Software Lets You Make Anyone Say Anything in
Real Time,” Mashable, March 20, 2016. As of January 23, 2019:
http://mashable.com/2016/03/20/face-tracking-software/#U3o58PfqH8qI
Stubbersfield, Joseph M., Jamshid J. Tehrani, and Emma G. Flynn, “Serial Killers,
Spiders and Cybersex: Social and Survival Information Bias in the Transmission of
Urban Legends,” British Journal of Psychology, Vol. 106, No. 2, 2015, pp. 288–307.
Suh, Chan S., Paul Y. Chang, and Yisook Lim, “Spill-Up and Spill-Over of Trust:
An Extended Test of Cultural and Institutional Theories of Trust in South Korea,”
Sociological Forum, Vol. 27, No. 2, 2012, pp. 504–526.
Sunstein, Cass R., #Republic: Divided Democracy in the Age of Social Media,
Princeton, N.J.: Princeton University Press, 2017.
Swift, Art, “Americans’ Trust in Mass Media Sinks to New Low,” Gallup,
September 14, 2016.
Taber, Charles S., and Milton Lodge, “Motivated Skepticism in the Evaluation
of Political Beliefs,” American Journal of Political Science, Vol. 50, No. 3, 2006,
pp. 755–769.
Tan, Chenhao, Vlad Niculae, Cristian Danescu-Niculescu-Mizil, and Lilian
Lee, “Winning Arguments: Interaction Dynamics and Persuasion Strategies in
Good-Faith Online Discussions,” 25th International World Wide Web Conference,
Conference Proceedings, Montreal, Canada: International World Wide Web
Conferences Steering Committee, April 2016, pp. 613–624.
Taub, Amanda, “The Real Story About Fake News Is Partisanship,” New York
Times, January 11, 2017. As of January 23, 2019:
https://www.nytimes.com/2017/01/11/upshot/the-real-story-about-fake-news-is-
partisanship.html
Thies, Jutus, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, and
Matthias Nießner, “Face2Face: Real-Time Face Capture and Reenactment of RGB
Videos,” IEEE Conference on Computer Vision and Pattern Recognition, Conference
Proceedings, Las Vegas, Nev.: Institute of Electrical and Electronics Engineers,
2016.
References 191
Thompson, Nicholas, and Fred Vogelstein, “Facebook’s Two Years of Hell,” Wired,
March 2018.
Thorson, Emily, “Belief Echoes: The Persistent Effects of Corrected
Misinformation,” Political Communication, Vol. 33, No. 3, 2016, pp. 460–480.
Thussu, D. K., News as Entertainment: The Rise of Global Infotainment, London:
SAGE Publications, 2007.
Tiku, Nitasha, “We’ll Share Our Emotional State as Willingly as We Share Our
Photos,” Wired, June 2018.
Tormala, Zakary L., and Richard E. Petty, “What Doesn’t Kill Me Makes Me
Stronger: The Effects of Resisting Persuasion on Attitude Certainty,” Journal of
Personal and Social Psychology, Vol. 83, No. 6, 2002.
———, “Source Credibility and Attitude Certainty: A Metacognitive Analysis of
Resistance to Persuasion,” Journal of Consumer Psychology, Vol. 14, No. 4, 2004,
pp. 427–442.
Tracy, Liz, “In Contrast to Tay, Microsoft’s Chinese Chatbot, Xiaolce, Is Actually
Pleasant,” Inverse, March 26, 2016. As of January 23, 2019:
https://www.inverse.com/article/13387-microsoft-chinese-chatbot
Tucker, Patrick, “Strava’s Just the Start: The US Military’s Losing War Against
Data Leakage,” Defense One, January 31, 2018.
U.S. Department of Defense, Joint Publication 1-02: Department of Defense
Dictionary of Military and Associated Terms, Washington, D.C., November 2010.
Ullah, Haroon K., Digital World War: Islamists, Extremists, and the Fight for Cyber
Supremacy, New Haven, Conn.: Yale University Press, 2017.
United States Code, Title 47, Section 230, Protection for Private Blocking and
Screening of Offensive Material, January 3, 2012.
Uslaner, Eric M., “Where You Stand Depends upon Where Your Grandparents
Sat: The Inheritability of Generalized Trust,” Public Opinion Quarterly, Vol. 72,
No. 4, 2008, pp. 725–740.
van den Oord, Aäron, Tom Walters, and Trevor Strohman, “WaveNet Launches in
the Google Assistant,” DeepMind Blog, October 4, 2017. As of January 23, 2019:
https://deepmind.com/blog/wavenet-launches-google-assistant/
van der Linden, Sander, Anthony Leiserowitz, Seth Rosenthal, and Edward
Maibach, “Inoculating the Public Against Misinformation About Climate
Change,” Global Challenges, Vol. 1, No. 2, 2017.
Vanian, Jonathan, “Facebook, Twitter Take New Steps to Combat Fake News and
Manipulation,” Fortune, January 20, 2018. As of January 24, 2019:
http://fortune.com/2018/01/19/facebook-twitter-news-feed-russia-ads/
192 The Emerging Risk of Virtual Societal Warfare
Verghese, Abraham, “How Tech Can Turn Doctors into Clerical Workers,” New
York Times Magazine, May 16, 2018.
Vettehen, P. H., and M. Kleemans, “Proving the Obvious? What Sensationalism
Contributes to the Time Spent on News Video,” Electronic News, Vol. 12, No. 2,
2018, pp. 113–127.
Vincent, James, “Artificial Intelligence Is Going to Make It Easier Than Ever to
Fake Images and Video,” The Verge, December 20, 2016. As of January 23, 2019:
https://www.theverge.com/2016/12/20/14022958/
ai-image-manipulation-creation-fakes-audio-video
VivoText, homepage, undated. As of March 6, 2019:
https://www.vivotext.com/
Vlahos, James, “Fighting Words,” Wired, March 2018.
Vosoughi, Soroush, Deb Roy, and Sinan Aral, “The Spread of True and False
News Online,” Science, Vol. 359, No. 6380, 2018, pp. 1146–1151.
Waltzman, Rand, “The Weaponization of Information: The Need for Cognitive
Security,” testimony presented before the Senate Armed Services Committee,
Subcommittee on Cybersecurity, Santa Monica, Calif.: RAND Corporation,
CT-473, April 27, 2017. As of March 5, 2019:
https://www.rand.org/pubs/testimonies/CT473.html
Weaver, Kimberlee, Stephen M. Garcia, Norbert Schwarz, and Dale T. Miller,
“Inferring the Popularity of an Opinion from Its Familiarity: A Repetitive Voice
Can Sound Like a Chorus,” Journal of Personality and Social Psychology, Vol. 92,
No. 5, 2007, pp. 821–833.
West, Darrell M., “Will Robots and AI Take Your Job? The Economic and
Political Consequences of Automation,” Brookings Institution, April 18, 2018. As of
January 22, 2019:
https://www.brookings.edu/blog/techtank/2018/04/18/will-robots-and-ai-take-
your-job-the-economic-and-political-consequences-of-automation/
Wharton, Bruce, “Remarks on ‘Public Diplomacy in a Post-Truth Society,’” in
Shawn Powers and Markos Kounalakis, eds., Can Public Diplomacy Survive the
Internet? Washington, D.C.: U.S. Advisory Commission on Public Diplomacy,
2017, pp. 7–11.
Whitson, Jennifer, and Adam Galinsky, “Lacking Control Increases Illusory
Pattern Perception,” Science, Vol. 322, No. 5898, 2008, pp. 115–117.
Witte, Kim, “Putting the Fear Back into Fear Appeals: The Extended Parallel
Process Model,” Communication Monographs, Vol. 59, No. 4, 1992, pp. 329–349.
Witte, Kim, and Mike Allen, “A Meta-Analysis of Fear Appeals: Implications
for Effective Public Health Campaigns,” Health Education and Behavior, Vol. 27,
No. 5, 2000, pp. 591–615.
References 193
Wittes, Benjamin, and Gabriella Blum, The Future of Violence: Robots and Germs,
Hackers and Drones, New York: Basic Books, 2015.
Wu, Hao-Yu, Michael Rubinstein, Eugene Shih, John Guttag, Frédo Durand, and
William Freeman, “Eulerian Video Magnification for Revealing Subtle Changes in
the World,” ACM Transactions on Graphics, Vol. 31, No. 4, 2012, pp. 1–8.
Wu, Tim, The Attention Merchants: The Epic Scramble to Get Inside Our Heads,
New York: Vintage Books, 2016.
Yam, CY, “Emotion Detection and Recognition from Text Using Deep Learning,”
Microsoft Developer Blog, November 29, 2015. As of January 22, 2019:
https://www.microsoft.com/developerblog/2015/11/29/
emotion-detection-and-recognition-from-text-using-deep-learning/
Yearsley, Liesl, “We Need to Talk About the Power of AI to Manipulate Humans,”
MIT Technology Review, June 5, 2017. As of January 22, 2019:
https://www.technologyreview.com/s/608036/
we-need-to-talk-about-the-power-of-ai-to-manipulate-humans/
You, Jong-sung, “Social Trust: Fairness Matters More Than Homogeneity,”
Political Psychology, Vol. 33, No. 5, 2012, pp. 701–721.
Youyou, Wu, Michael Kosinski, and David Stilwell, “Computer-Based Personality
Judgments Are More Accurate Than Those Made by Humans,” Proceedings of the
National Academy of Sciences, Vol. 112, No. 4, 2015, pp. 1036–1040.
Zhang, Han, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei
Huang, and Dimitris Metaxas, “StackGAN: Text to Photo-Realistic Image
Synthesis with Stacked Generative Adversarial Networks,” IEEE International
Conference on Computer Vision, Conference Proceedings, Venice, Italy: Institute of
Electrical and Electronics Engineers, October 2017.
Zmerli, Sonja, and Ken Newton, “Social Trust and Attitudes Toward Democracy,”
Public Opinion Quarterly, Vol. 72, No. 4, 2008, pp. 706–724.
The evolution of advanced information environments is rapidly creating a new category of cyberaggression: efforts to manipulate or disrupt the information foundations on which economic and social systems depend. RAND researchers call this growing threat virtual societal warfare and analyze its characteristics and implications for the future. To understand the risk, the authors surveyed evidence across a range of categories to sketch the initial contours of how these techniques might evolve. They grounded the assessment in (1) detailed research on trends in the changing character of the information environment in the United States and other advanced democracies; (2) the insights of social science research on attitudes and beliefs; and (3) developments in emerging technologies that bear on the practices of hostile social manipulation and its more elaborate and dangerous cousin, virtual societal warfare. The authors then present three scenarios for how social manipulation could affect advanced societies over the next decade. The analysis suggests an initial set of characteristics that define the emerging challenge of virtual societal warfare, including that national security will increasingly rely on a resilient information environment and a strong social topography, and that conflict will increasingly be waged between and among networks. Although more research is urgently required, the authors conclude by pointing to several initial avenues of response for enhancing democratic resilience in the face of this growing risk, including building inoculation and resilience against the worst forms of information-based social manipulation and better understanding the workings and vulnerabilities of emerging technologies.
ISBN-10 1-9774-0272-0
ISBN-13 978-1-9774-0272-1