Hopp - 2020 - Why Do People Share Ideologically Extreme, False
ORIGINAL RESEARCH
Recently, substantial attention has been paid to the spread of highly partisan and often
factually incorrect information (i.e., so-called “fake news”) on social media. In this study,
we attempt to extend current knowledge on this topic by exploring the degree to which
individual levels of ideological extremity, social trust, and trust in the news media are
associated with the dissemination of countermedia content, or web-based, ideologically
extreme information that uses false, biased, misleading, and hyper-partisan claims to
counter the knowledge produced by the mainstream news media. To investigate these
possible associations, we used a combination of self-report survey data and trace data
collected from Facebook and Twitter. The results suggested that sharing countermedia
content on Facebook is positively associated with ideological extremity and negatively
associated with trust in the mainstream news media. On Twitter, we found evidence that
countermedia content sharing is negatively associated with social trust.
Keywords: Fake News, Countermedia, Social Media, Survey, Trace Data, Computational
Social Sciences
doi:10.1093/hcr/hqz022
Human Communication Research 00 (2020) 1–28 © The Author(s) 2020. Published by Oxford University Press on behalf of International Communication Association. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Most descriptions of the functions of the media in the 20th century implicitly utilize
the theoretical visioning of the public sphere cultivated by Habermas (1991). Essential
to these accounts of the public sphere’s proper functioning is the idea of gatekeeping,
which is the process by which a piece of information gets selected, altered, and mor-
phed into a digestible message suitable for large-scale distribution and consumption
(Shoemaker & Vos, 2009). During the gatekeeping process, a journalist gathers as
much information as possible and then selects—based on a host of factors—the pieces
of information most important for the democratic foundation of a society. Thus,
for at least the last several decades, journalists have normatively engaged in a set of
practices where they objectively assess information and provide a truthful, good-faith
rendering of reality that represents all legitimate viewpoints (Schudson, 2001). In this
way, the mainstream media assumed the role of an epistemic authority within the
dominant public sphere (Bruns, 2018).
Countermedia
Recently, scholars (e.g., Noppari, Hiltunen, & Ahva, 2019; Wasilewski, 2019;
Ylä-Anttila, 2018; Ylä-Anttila et al., 2019) have argued that information commonly
referred to as fake news in the popular and academic literatures may be better
understood as countermedia content. Numerous difficulties arise when seeking to
precisely conceptualize fake news as a social phenomenon. First, the word “fake”
connotes intentionality: a manifest attempt on the part of the message creator to
spread untrue information. This results in obvious issues, as it is often impossible to
objectively assess communicator intent. Second, the phrase “fake news” lends itself
to politicization (Lazer et al., 2018). Indeed, one recent study suggested that the
use of the phrase may hamper audiences’ ability to distinguish news content from
false political information (Van Duyn & Collier, 2018). Third, social and political
events are often subject to interpretation (Guess, Nagler, & Tucker, 2019). In many
cases, it is difficult to establish specific criteria for what constitutes objectively false
information. Fourth, many websites that publish false political information also
“distort genuine news reports, and copy or repurpose content from other outlets” and
“selectively amplify political events in an over-the-top style that flatters the prejudices
of a candidate’s supporters” (Guess, Nyhan, & Reifler, 2018, p. 4). Classifying all
One important aspect of the selective sharing perspective relates to social media’s
capacity to afford users the ability to take corrective actions (Liang, 2018; Shin
& Thorson, 2017). Corrective actions are reactive behaviors that involve individual
actors “correcting” facts and narratives derived from the mainstream news media
that are perceived to be faulty, incorrect, incomplete, or biased (Rojas, 2010). These
Platform differences
In this work, we focus on Facebook and Twitter, two of the most popular social
media platforms in the United States for political discussion. It is possible—and
perhaps even probable—that countermedia sharing behaviors differ across social
media platforms. Research on fake news has shown that Facebook serves as an
especially important facilitator of the spread of fake news (Allcott & Gentzkow,
2017; Guess et al., 2019; Guess et al., 2018). Separately, cross-platform studies suggest
that social media sites have different technological and social capabilities, and that
these factors may, together, result in different usage behaviors (e.g., Shane-Simpson,
Manago, & Gillespie-Lynch, 2018). However, while it may be likely that countermedia content
sharing behaviors differ across platforms, the research as it currently stands does not
provide enough information to form specific hypotheses pertaining to the degree
to which associations between the independent variables of primary interest and
countermedia content sharing may differ across Facebook and Twitter. Nonetheless,
the question is potentially important.
Method
This study combined digital trace data collected on Facebook and Twitter with
self-report data obtained via a survey. Study recruitment was accomplished using
Qualtrics. Four sample controls were enforced: an approximate 50/50 gender split1 ;
a requirement that respondents be active users of both Facebook and Twitter2 ;
a requirement that respondents be current U.S. citizens; and a requirement that
respondents be 18 years or older. Given the exploratory nature of the current study,
we also requested in the recruitment that respondents talk about politics online at
least monthly. However, due to method-based limitations, this was not enforced as a
sample inclusion requirement.3
Before participating in the study, respondents were informed of the requirements
governing participation and provided with a consent form that articulated the
parameters of data collection. Regarding the collection of social media data,
respondents were told that social media messages “will be collected anonymously,
and at no time will the researchers know your identity, or the identities of your
friends. The data will solely be used to better understand how you share news
on social media.” Upon agreement to the terms of the study, respondents were
asked to authorize a custom application that was used to harvest their social media
data. After authorizing the application, respondents were piped into the survey
environment, which was hosted on Qualtrics’ servers. Self-report and social media
data were joined using an anonymous identification code that was assigned by
the web application. The following information was extracted from the Facebook
API: mobile_status_update, created_note, shared_story, created_event, wall_post,
app_created_story, and published_story. We did not capture newsfeed or friend
information. In the case of Twitter, only the content of posted tweets was retained.
In the cases of both Facebook and Twitter, our data extraction processes allowed us
to assess text and links posted by users. Our institutional review board approved all
study procedures. All data collection processes fully conformed with both platforms’
terms of service at the time of execution. Data collection occurred in the period
between 7 March 2017 and 6 June 2017. Social media data was downloaded on 6 June
2017. This means that the obtained social media data for each user was exhaustive
from their first post on the platform up until the harvest date.
Measures
Countermedia content sharing
Countermedia sharing frequency was assessed using a list-matching procedure. To
develop a comprehensive list of countermedia sites, we first generated a corpus of
domains by pooling lists of so-called fake news domains published by About.com,
Aloisius, CBS News, The Daily Dot, Fake News Watch, Fake News Checker, Melissa
Zimdars, NPR, Snopes Field Guide, The New Republic, and U.S. News and World
Report.7 We manually reviewed the final roster of sites to ensure that sites appearing
on our final list were predominantly focused on political and/or social events and
were not parody accounts (e.g., The Onion, Clickhole). To appear in our list, a site
had to appear on at least two of the identified fake news domain lists. In total,
we identified 106 countermedia websites. A Python script was used to extract all
social media posts that contained links to the identified countermedia domains.
Content from a total of 62 domains was shared across both platforms (see Supporting
Information Appendix A for more details). To create the final sharing measure,
we summed the number of posts containing countermedia links for each user. In
the Facebook sample, a total of 1,152 countermedia content shares were identified.
The maximum number of pieces of content shared by a single user was 171. The
majority of respondents (71.1%) did not share any countermedia items. The overall
mean number of shares was 1.70 (SD = 8.54). Among the subsample of users who
shared at least one instance of countermedia, the mean number of shares was 5.88
(SD = 15.11). In the Twitter sample, 128 countermedia content shares were identified.
No user shared more than 33 countermedia content items. A large majority (95.0%)
of respondents did not share any countermedia content. The overall countermedia
sharing mean was 0.24 (SD = 1.94). Among those that shared at least one piece of
countermedia content, the mean number of shares was 4.74 (SD = 7.49). Density and
histogram plots showing the countermedia sharing distributions for both Facebook
and Twitter are provided in Supporting Information Appendix B.
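The list-matching step described above can be sketched as follows. This is an illustrative reconstruction, not the study’s actual script: the domain names and post records below are hypothetical stand-ins.

```python
from urllib.parse import urlparse

# Illustrative subset; the study's full roster contained 106 domains.
COUNTERMEDIA_DOMAINS = {"example-countermedia.com", "hyperpartisan-news.net"}

def extract_domain(url):
    """Return the host of a URL, lowercased, with any 'www.' prefix dropped."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def count_countermedia_shares(posts):
    """Count posts containing at least one link to a listed countermedia domain."""
    return sum(
        1
        for post in posts
        if any(extract_domain(u) in COUNTERMEDIA_DOMAINS for u in post.get("urls", []))
    )

posts = [
    {"urls": ["https://www.example-countermedia.com/story"]},
    {"urls": ["https://www.nytimes.com/article"]},
    {"urls": []},
]
print(count_countermedia_shares(posts))  # 1
```

Summing this count per user, as described above, yields the final sharing measure.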
Ideological extremity
Ideological attitude extremity was measured by assessing participant conservatism
using three items, all on 7-point scales where higher scores indicated higher levels of
conservatism. These individual items were subsequently recoded into a 4-point scale
where those who selected options at the scale poles were assigned higher numbers
(i.e., a 1 or 7 was coded as a 4, a 2 or 6 was coded as 3, and so on) and collapsed into
a single measure (Facebook sample: M = 2.44, SD = 1.09, α = .95; Twitter sample:
M = 2.45, SD = 1.08, α = .94).
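The folding recode described above can be expressed compactly: a response x on the 1–7 scale maps to |x − 4| + 1 on the 4-point extremity scale. A minimal sketch, in which the item responses and the use of a mean composite are illustrative assumptions:

```python
def fold_extremity(response):
    """Fold a 7-point ideology response onto a 4-point extremity scale:
    1 or 7 -> 4, 2 or 6 -> 3, 3 or 5 -> 2, 4 -> 1."""
    assert 1 <= response <= 7
    return abs(response - 4) + 1

# Collapse the three folded items into a single score per respondent
# (a mean composite is assumed here; the responses are hypothetical).
items = [7, 6, 7]
extremity = sum(fold_extremity(r) for r in items) / len(items)
print(round(extremity, 2))  # 3.67
```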
Social trust
Social trust was measured using six items (all on 7-point scales) that, together,
assessed the degree to which the respondent held reciprocal relationships with a
variety of social others, including friends, family members, and members of their
surrounding community. According to Fukuyama (1996), wide-ranging feelings of
mutuality and affinity can be understood as the end-product of high levels of social
trust. In this sense, our measure tapped into perceptions pertaining to
accumulated levels of social trust with a variety of others, which, as described above,
have been shown to be an integral part of participation in the public sphere (Facebook
sample: M = 5.29, SD = 1.12, α = .81; Twitter sample: M = 5.32, SD = 1.06, α = .79).
Trust in the news media
Trust in the mainstream news media was assessed using multiple items that were combined into a single index (Facebook sample: M = 4.07, SD = 1.39, α = .94; Twitter sample: M = 4.05, SD = 1.39, α = .94).
Control measures
Modeling approach
The distributions of the content sharing created some complications as they pertained
to the selection of appropriate modeling techniques. In the case of Twitter, the
exceptionally small number of cases with non-zero countermedia sharing counts
(n = 27) resulted in concerns related to power and our ability to identify a stable and
unbiased count model to describe the data. Accordingly, we used a logistic regression
model to assess the relationship between the countermedia identity markers and the
odds of sharing one or more countermedia content items on Twitter. In regard to
the mainstream news content sharing frequency, we again observed a relatively small
number of participants with non-zero counts (n = 156). Within this subsample, the
five most active users shared over 75% of all shared content. The relatively small
sample number of non-zero cases, combined with the presence of outlying cases,
again made it difficult to identify a trustworthy modeling approach for the count data;
as such, a binary logistic regression model was employed. In the case of Facebook,
we observed a comparatively greater number of cases with non-zero share counts
for both the countermedia (n = 196) and mainstream news (n = 512) variables. As
such, we were reasonably confident in our ability to model the positive counts;
accordingly, negative binomial logit hurdle models were used for the Facebook outcomes.
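The two-part hurdle strategy can be illustrated with simulated data: a logistic model handles the zero/non-zero decision, and a count model handles the positive counts. This sketch is a simplification in two labeled respects: it fits an untruncated Poisson on the positive cases in place of the zero-truncated negative binomial used in a formal hurdle model, and all predictors and effect sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
x = rng.normal(size=n)                    # e.g., standardized ideological extremity
X = np.column_stack([np.ones(n), x])

# Simulate hurdle data: a logistic "gate" decides zero vs. non-zero sharing,
# then positive counts are drawn for the gated-in cases.
gate = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.8 * x))))
counts = gate * (1 + rng.poisson(np.exp(0.2 + 0.3 * x)))

def newton_glm(X, y, link):
    """Fit a GLM by Newton-Raphson; link is 'logit' or 'log' (Poisson)."""
    beta = np.zeros(X.shape[1])
    for _ in range(25):
        eta = X @ beta
        mu = 1 / (1 + np.exp(-eta)) if link == "logit" else np.exp(eta)
        w = mu * (1 - mu) if link == "logit" else mu
        beta += np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (y - mu))
    return beta

# Binary component: any sharing vs. none (exponentiated slope = odds ratio).
b_logit = newton_glm(X, (counts > 0).astype(float), "logit")
# Count component: positive counts only (exponentiated slope = rate ratio).
# A Poisson stands in here for the paper's negative binomial.
pos = counts > 0
b_count = newton_glm(X[pos], counts[pos].astype(float), "log")

print("odds ratio for x:", np.exp(b_logit[1]))
print("rate ratio for x:", np.exp(b_count[1]))
```

The two components can tell different stories, which is why the results below are reported separately for the binary and count (rate) parts of each Facebook model.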
Results
Table 1 Logistic and Count Components of Negative Binomial Logit Hurdle Predicting Countermedia Content Sharing on Facebook
Notes: All variance inflation factors are below 2.75. IRR = incidence rate ratio; OR = odds ratio; SM = social media. †p < .10; ∗p < .05; ∗∗p < .001.
Table 2 Logistic Regression Model Predicting Countermedia Content Sharing on Twitter (columns: b, SE, OR)
Note: All variance inflation factors are below 2.93. OR = odds ratio; SM = social media. †p < .10; ∗p < .05; ∗∗p < .001.
Notably, there were several outliers in the Facebook data. Specifically, three users
had high countermedia share counts (58, 95, and 172). To assess the degree to
which these cases affected our reported results, we removed them and re-estimated
the model. The results of the outlier-excluded model were essentially identical: in
the binary component of the model, we observed significant relationships between
sharing one or more countermedia content items and both ideological extremity
(b = .34; SE = .10; p < .001; OR = 1.41) and media trust (b = −.26; SE = .08; p < .001;
OR = .77), while the relationship between sharing at least one countermedia content
item and social trust was negative but not significant at the .05 level (b = −.16;
SE = .09; p = .096; OR = .86). In the count component of the model, we observed
a significant relationship between ideological extremity and countermedia sharing
frequency (b = .33; SE = .16; p = .035; IRR = 1.39). The relationships between sharing
frequency and both social trust (b = −.25; SE = .13; p = .062; IRR = .79) and trust in
the mainstream news media (b = .06; SE = .10; p = .578; IRR = 1.06) were, again, not
statistically significant.
Follow-up analysis
Studies on fake news have generally shown that those self-identifying as very
conservative share the most fake news (Grinberg et al., 2019; Guess et al., 2019; Guess
et al., 2018). However, in our study, we failed to find significant relationships between
countermedia sharing and conservatism in either the Facebook or Twitter models
(Tables 1 and 2). A similar set of results was observed in relation to identification
as a Republican. To further explore this potential discrepancy with extant literature,
we rounded the conservatism measure to the nearest whole number and examined
the summed number of countermedia content shares across the scale range. In the
Facebook sample, those scoring a 7 on the conservatism scale accounted for 25.6%
of all countermedia shares, which was the highest per-category percentage observed.
Table 3 Logistic and Count Components of Negative Binomial Logit Hurdle Model Predicting Mainstream News Content Sharing on Facebook
Notes: All variance inflation factors are below 2.75. IRR = incidence rate ratio; OR = odds ratio; SM = social media. †p < .10, ∗p < .05, ∗∗p < .01, ∗∗∗p < .001.
In the Twitter sample, those scoring a 7 on the conservatism measure accounted
for 32.0% of all countermedia shares, again accounting for the highest per-category
share percentage. That said, those self-identifying as very liberal (i.e., conservatism
score = 1) also shared substantial amounts of countermedia in both samples: 17.5%
on Facebook and 16.4% on Twitter. In the Facebook sample, those scoring at the
extreme ends of the ideological spectrum shared 43.4% of all countermedia shares,
despite the fact that these categories only represented 22.9% of the analytic sample.
On Twitter, this pattern was nearly identical: 22.8% of the sample was responsible for
48.4% of all countermedia shares.
Results summary
H1 suggested that ideological extremity would be positively related to sharing coun-
termedia on social media. This hypothesis was partially supported, as we observed
positive and statistically significant coefficients for both the binary and count
components of the Facebook model and a non-significant coefficient in the Twitter model.
Table 4 Logistic Regression Model Predicting Mainstream News Content Sharing on Twitter (columns: b, SE, OR)
Notes: All variance inflation factors are below 2.93. OR = odds ratio; SM = social media. †p < .10; ∗p < .01; ∗∗p < .001.
H2 predicted that social trust would be negatively associated with countermedia
content sharing. This hypothesis was partially supported. In the Twitter model,
a negative and statistically significant coefficient was observed. However, in the
Facebook model, the association between social trust and content sharing was not sta-
tistically significant in either the binary or rate components. H3 predicted a negative
relationship between trust in the mainstream media and countermedia sharing. This
hypothesis was, again, partially supported. In the Facebook models, a statistically sig-
nificant and negative association was observed in the logistic component. However,
in the rate component and in the Twitter model, this association was not statistically
different from zero. H4 suggested that the variables of primary interest in the
countermedia models would be comparatively poor predictors of mainstream news
sharing. This hypothesis was broadly supported; looking at both the Facebook and
Twitter models, only one parameter estimate of interest was statistically associated
with mainstream news sharing (ideological extremity was negatively associated with
mainstream news content sharing in the rate component of the Facebook model).
Discussion
A handful of potentially important insights flow from our findings. First, we believe
that our conceptualization of countermedia may be helpful for future research.
Fake news, as a meaningful descriptor of real-world phenomena, is fraught with
issues. Suggested alternatives, such as misinformation and disinformation, require
frequently unknowable knowledge of communicator motives. Countermedia, in con-
trast, neither invokes potentially problematic frames (e.g., the implication that fake
news is a form of “news”) nor requires knowledge of the motivations spurring content
creators. Instead, this concept speaks to the observable epistemic functionality of
communicated content. While the concept certainly needs future refinement, we
believe it serves as an important step towards a more nuanced handling of so-called
fake news.
Second, our focus on the relationship between content sharing and granular,
individual-level political and social factors is noteworthy. The approach used in this
study can be contrasted with other quantitative-empirical studies on the topic, which
have predominantly focused on the demographic patterns underlying democratically
deviant information dissemination (e.g., Grinberg et al., 2019). These studies are
useful and important, but do not directly speak to the nuances of social and political
self. Broadly speaking, the demographic patterns observed in relation to counter-
media sharing were consistent with those observed in recent, large-scale studies
using representative samples to probe fake news sharing (Grinberg et al., 2019; Guess
et al., 2019), particularly as it relates to countermedia sharing and age. At the same
time, our results suggest that a comprehensive understanding of the dissemination
of false, misleading, biased, and hyper-partisan content requires looking beyond
demographic features and party identification and into the sometimes-complex
aspects (e.g., personality traits, socialization factors, cognitive needs) of the political
self. Therein, it should be noted that the independent variables of central interest did a
generally poor job of predicting the rate at which countermedia content was shared in
the Facebook model. Specifically, of these three variables, only ideological extremity
was significant in both the count and rate components of the model. This suggests—
as perhaps is to be expected in light of prior work on countermedia and related forms
of information sharing (Grinberg et al., 2019; Guess et al., 2019)—that ideological
sentiment may play an especially important role in the countermedia identity. More
Facebook is a central conduit for the transfer of fake news. One interpretation of
this frequency-based disparity may be linked to platform-specific affordances. Due
to a combination of service policies (e.g., the so-called “real name policy”; Facebook,
2019), widespread diffusion, “group” spaces, and normative social application, Face-
book may be understood as transmitting a more voluminous amount of personal
Supporting Information
Additional Supporting Information may be found in the online version of this article.
Please note: Oxford University Press is not responsible for the content or functionality
of any supplementary materials supplied by the authors. Any queries (other than
missing material) should be directed to the corresponding author for the article.
Notes
1 We instituted an approximate 50/50 sex split because our prior work with
Qualtrics has shown us that samples without quotas can result in pronounced
sex imbalances.
2 As it pertains to the active user criteria, we required that all users have a current
account with at least 50 pieces of posted content on both platforms. This safeguard
was put in place to avoid scenarios where a user may create an account simply to
qualify for the study.
3 When constructing the recruitment language, our goal was to ensure that partici-
pants were interested in politics generally, as a sample comprised predominantly of
politically ambivalent citizens would be of limited theoretical or practical interest.
Self-report data indicated that the sample (and accompanying sub-samples) were
comprised of politically interested and engaged citizens: the sample average on
the political interest scale (range, 1–7) was 5.27 (SD = 1.44). In all, 87.2% of
respondents had a political interest score equal to or above the composite scale
center point of 4.00.
4 Of those respondents that engaged with the study materials, 13.5% both met our
inclusion criteria and provided valid survey responses.
5 Little’s Missing Completely at Random (MCAR; Little, 1988) test was used to
References
Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal
of Economic Perspectives, 31, 211–236. doi:10.1257/jep.31.2.211
Ardèvol-Abreu, A., & Gil de Zúñiga, H. (2017). Effects of editorial media bias perception and
media trust on the use of traditional, citizen and social media news. Journalism & Mass
Communication Quarterly, 94, 703–724. doi:10.1177/1077699016654684
Barnidge, M., & Rojas, H. (2014). Hostile media perceptions, presumed media influence, and
political talk: Expanding the corrective action hypothesis. International Journal of Public
Opinion Research, 26, 135–156. doi:10.1093/ijpor/edt032
Bobkowski, P. (2015). Sharing the news: Effects of information utility and opinion lead-
ership on news sharing. Journalism & Mass Communication Quarterly, 92, 320–345.
doi:10.1177/1077699015573194
Brants, K. (2013). Trust, cynicism, and responsiveness: the uneasy situation of journalism
in democracy. In C. Peters & M. Broersma (Eds.), Rethinking journalism (pp. 27–39).
New York, NY: Routledge.
Bruns, A. (2018). Gatewatching and news curation: Journalism, social media, and the public
sphere. New York, NY: Peter Lang.
Caprara, G. V., Barbaranelli, C., & Zimbardo, P. G. (1999). Personality profiles and political
parties. Political Psychology, 20, 175–197. doi:10.1111/0162-895X.00141
Cho, D., & Acquisti, A. (2013, June). The more social cues, the less trolling? An empirical study
of online commenting behavior. Paper presented at the Twelfth Workshop on the Economics
of Information Security, Washington, DC. Retrieved from https://www.econinfosec.org/
Hilbe, J. M. (2014). Modeling count data. New York, NY: Cambridge University Press.
Hylton, W. S. (2017). Down the Breitbart hole. The New York Times Magazine. Retrieved from
https://www.nytimes.com/2017/08/16/magazine/breitbart-alt-right-steve-bannon.html.
Knobloch-Westerwick, S. (2015). Choice and preference in media use: Advances in selective
exposure theory and research. New York, NY: Routledge.
Oz, M., Zheng, P., & Chen, G. (2018). Twitter versus Facebook: Comparing incivil-
ity, impoliteness, and deliberative attributes. New Media & Society, 20, 3400–3419.
doi:10.1177/1461444817749516
Rojas, H. (2010). “Corrective” actions in the public sphere: How perceptions of media and
media effects shape political behaviors. International Journal of Public Opinion Research,
Weeks, B. E., & Holbert, R. L. (2013). Predicting dissemination of news content on social
media: A focus on reception, friending, and partisanship. Journalism and Mass Communi-
cation Quarterly, 90, 212–232. doi:10.1177/1077699013482906
Weinberg, L. (2019). Fascism, populism, and American democracy. New York, NY: Routledge.
Westfall, J., Van Boven, L., Chambers, J. R., & Judd, C. M. (2015). Perceiving politi-