An initial accuracy focus reduces the effect of prior exposure on perceived accuracy of news headlines
Cognitive Research: Principles and Implications volume 5, Article number: 55 (2020)
Abstract
The illusory truth effect occurs when the repetition of a claim increases its perceived truth. Previous studies have demonstrated the illusory truth effect with true and false news headlines. The present study examined the effects that different ratings made during initial exposure have on the illusory truth effect with news headlines. In two experiments, participants (total N = 575) rated a set of news headlines in one of two conditions. Some participants rated how interesting they were, and others rated how truthful they were. Participants later rated the perceived accuracy of a larger set of headlines that included previously rated and new headlines. In both experiments, prior exposure increased perceived accuracy for participants who made initial interest ratings, but not for participants who made initial truthfulness ratings. The increase in perceived accuracy that accompanies repeated exposure was attenuated when participants considered the accuracy of the headlines at initial exposure. Experiment 2 also found evidence for a political bias: participants rated politically concordant headlines as more accurate than politically discordant headlines. The magnitude of this bias was related to performance on a cognitive reflection test; more analytic participants demonstrated greater political bias. These results highlight challenges that fake news presents and suggest that initially encoding headlines’ perceived truth can serve to combat the illusion that a familiar headline is a truthful one.
Statement of significance
One problem posed by fake news is that it misleads people, causing them to believe false information. Decades of research have shown that repeating information increases its perceived accuracy. These findings have recently been extended to include news headlines: if people see news headlines repeatedly, they rate them as more accurate than headlines they have seen only once, and this occurs for both real and fake news headlines. The present study examined whether the type of rating made during initial exposure affects the tendency to evaluate previously viewed stimuli as truthful. The stimuli were news headlines, which participants rated in one of two conditions: some participants rated how interesting the headlines were, and others rated how truthful they were. All participants later rated the accuracy of a larger set of headlines that included previously rated and new headlines. In both experiments, headlines that were initially rated were judged as more accurate than new headlines for participants who made interest ratings, but not for participants who made truthfulness ratings. There was also evidence for a political bias: participants rated politically concordant headlines as more accurate than politically discordant headlines, and this bias increased with cognitive reflection. These results highlight challenges that fake news presents and suggest possible interventions. Specifically, people should consider the truthfulness of news headlines they see to avoid the increase in perceived accuracy that repetition would otherwise produce. Finally, people should be instructed about partisan biases in evaluating news headlines in an attempt to reduce these biases.
“Repetition does not transform a lie into truth”—U.S. President Franklin Delano Roosevelt
Repeating false information does not make it true. Decades of research, however, have demonstrated that repeating false claims increases their perceived accuracy. Hasher et al. (1977) first reported that repeating both true and false general knowledge statements increased their perceived accuracy. This finding has since been termed the illusory truth effect (also referred to as the repetition-induced truth effect, truth effect, effect of prior exposure, and effect of repeated exposure). Most explanations for the illusory truth effect claim that repeating a statement increases the fluency with which it is processed, and this processing fluency is then misattributed to a feeling of truth for the statement (e.g., Reber and Unkelbach 2010). Indeed, repetition activates the perirhinal cortex, a region of the brain associated with fluency (Wang et al. 2016). Other explanations for illusory truth posit that frequency of occurrence may be a cue to the validity of information, that recognizing information and feelings of familiarity may increase belief in the information, or that repeated information creates a set of coherent references in memory, which in turn leads to greater perceived accuracy (Unkelbach et al. 2019). A meta-analysis has demonstrated the robustness of this effect (Dechêne et al. 2010). In addition, warnings sometimes reduce but do not eliminate the illusory truth effect (Nadarevic and Aßfalg 2017), and knowledge of statements’ veracity does not eliminate the effect (Fazio et al. 2015). The effect has also been replicated in the context of subjective sociopolitical statements (Arkes et al. 1989) and consumer opinions (Johar and Roggeveen 2007), and it can be detected weeks (Bacon 1979; Garcia-Marques et al. 2015) and even months later (Schwartz 1982). Furthermore, the illusory truth effect occurs for implausible statements (Fazio et al. 2019), and individual differences in cognitive ability and cognitive style do not moderate this effect (De keersmaecker et al. 2020). Thus, the illusory truth effect appears to be robust across individuals, material domains, and time.
The illusory truth effect has important implications. It highlights the difficulty in understanding truth in a modern world that is rich with information that varies in its depiction and representation of the truth. With the existence of 24-h news channels, for example, false news stories may be repeated several times in a short span, which, according to the illusory truth effect, should increase their perceived accuracy among viewers. Similarly, on social media, politicians may repeat false assertions, which should increase their perceived accuracy among their followers. Fake news stories also spread rapidly on social media (Vosoughi et al. 2018), highlighting the crucial need for interventions to combat such proliferation of misinformation.
Two recent studies examined the effects of repeated exposure to news headlines on their perceived accuracy. Pennycook et al. (2018) had participants rate their willingness to share 12 news headlines (6 true and 6 false) on social media, complete filler tasks, and then rate the familiarity and accuracy of 24 news headlines (12 that had been previously rated and 12 new headlines). Pennycook et al. (2018) found that perceived accuracy was greater for those headlines previously rated than for new headlines, and this effect was similar for true and false headlines. Smelter and Calvillo (2020) had participants rate the humor of 24 headlines (12 true and 12 false), complete filler tasks, and then rate the accuracy of 48 headlines (24 old and 24 new). They replicated the effect of prior exposure on perceived accuracy from Pennycook et al. (2018). These studies demonstrated that the illusory truth effect occurs for news headlines, which was explained by repeated exposure increasing processing fluency. An important implication of these studies is that spreading fake news on social media increases its perceived accuracy.
The primary goal of the present study was to examine whether the type of ratings made during initial exposure affects the magnitude of the repeated exposure effect. Pennycook et al. (2018) and Smelter and Calvillo (2020) had different initial ratings and found different sized effects of prior exposure. Specifically, the effect of repeated exposure was larger with Smelter and Calvillo’s (2020) humor ratings than it was with Pennycook et al.’s (2018) willingness to share ratings. The particular initial ratings that participants make appear to influence the magnitude of the illusory truth effect with headlines. We speculate that when participants judge their willingness to share a headline, they may rely on some of the same cues that they would use to judge truthfulness, whereas for other unrelated judgments, like humor, they may utilize different cues. Willingness to share a headline is also related to its perceived accuracy (Altay et al. 2020).
In a recent study, Brashier et al. (2020) included two different initial rating tasks. They had some participants rate how interesting a set of statements were, whereas other participants rated how truthful they were. Participants then saw a larger set of statements that included the original statements and a new set of statements, and they rated their perceived accuracy for this set. Brashier et al. (2020) found the typical illusory truth effect when the initial ratings were for interest, but this effect did not occur when the initial ratings were about truthfulness. Their later experiments showed that initial truthfulness ratings only eliminated the illusory truth effect when participants had knowledge of the truth of the statements during the initial rating. Brashier et al. (2020) concluded that the illusory truth effect disappears when participants think about statements’ truth at initial exposure, particularly when they have the knowledge to recognize that false statements are not true. The increase in fluency associated with repeated exposure does not lead to increased perceived accuracy when individuals focus on the accuracy of information at initial encoding.
In the present study, we extended Brashier et al.’s (2020) method to news headlines. We examined whether asking participants to consider the truthfulness of news headlines at initial exposure reduces the effects of prior exposure on perceived accuracy reported by Pennycook et al. (2018) and Smelter and Calvillo (2020). If our prediction is supported, this finding would suggest that a strategy to reduce the impact of the spread of fake news on social media would be to encourage people to think about the truthfulness of news stories that they see. These findings can also inform theoretical explanations of the illusory truth effect.
Preregistration and ethics information
Before data collection for each experiment, we preregistered our hypotheses, data collection plans, inclusion criteria, and planned analyses on the Open Science Framework (OSF). We note our exploratory analyses that were not preregistered in the Results sections of each experiment. Furthermore, the materials and data from both experiments are available on the OSF (https://osf.io/8xvdy/). The experiments described in this manuscript were approved by an Institutional Review Board prior to data collection and all participants consented to their participation and to their de-identified data being posted on the OSF.
Experiment 1
In Experiment 1, participants initially rated headlines in one of two conditions. Some participants rated how interesting the headlines were, whereas other participants rated how truthful they were. All participants then rated the accuracy of a larger set of headlines that included the previously rated and new headlines. The primary hypothesis concerned the interaction between initial rating task and prior exposure. Specifically, we predicted that when the initial rating task was interest, there would be an effect of prior exposure, such that repeated headlines would result in greater perceived accuracy than new headlines; but when the initial rating task was truthfulness, there would not be an effect of prior exposure. In other words, the illusory truth effect would be present for participants who made initial interest ratings, but absent for participants who made initial truthfulness ratings. We also predicted that there would be a main effect of prior exposure (repeated headlines would result in greater perceived accuracy than new headlines) and a main effect of headline truth (true headlines would result in greater perceived accuracy than false headlines). We did not predict any other interactions.
Methods
Power analysis
To determine our sample size, we conducted a power analysis. The two previous studies that examined the effects of prior exposure on perceived accuracy of news headlines found effect sizes of ηp2 = .09 and ηp2 = .20 (Pennycook et al. 2018; Smelter and Calvillo 2020, respectively). Using G*Power 3 (Faul et al. 2007), we calculated that we needed 82 participants per condition to have a power of 0.80 to detect the smaller of these two effect sizes. Thus, we aimed for a total of 164 participants (with 82 in each condition). After data collection, we realized that our power analysis was based on the ability to detect an illusory truth effect in a specific group, rather than to detect an interaction between groups. Therefore, this study may have been underpowered to test our primary hypothesis.
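A minimal sketch of this kind of calculation (Python with statsmodels; an illustration, not the original G*Power computation) converts a partial eta-squared of .09 to Cohen's f and solves for the total sample size of a two-group F test at 80% power. Because G*Power's repeated-measures routines make additional assumptions (e.g., about the correlation among repeated measures), the result approximates rather than reproduces the 82-per-condition figure.

```python
# Illustrative only: approximate the sample-size calculation without G*Power.
import math
from statsmodels.stats.power import FTestAnovaPower

eta_p_sq = 0.09                                   # smaller of the two published effect sizes
cohens_f = math.sqrt(eta_p_sq / (1 - eta_p_sq))   # convert partial eta-squared to Cohen's f

# Total N for a one-way, two-group F test at alpha = .05 and power = .80
# (a simplification of G*Power's repeated-measures routine).
n_total = FTestAnovaPower().solve_power(effect_size=cohens_f, alpha=0.05,
                                        power=0.80, k_groups=2)
print(f"Cohen's f = {cohens_f:.3f}; approximate total N = {math.ceil(n_total)}")
```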
Participants
We preregistered two inclusion criteria (described in “Materials and procedure” section). A total of 212 Mechanical Turk workers completed this experiment, and 172 met both inclusion criteria. Of these 172 participants, 83 identified as female and 89 identified as male. Participants ranged in age from 19 to 78, with a median of 36 years, and all participants claimed that they resided in the USA.
Design
The design of Experiment 1 was a 2 (initial rating task: interest, truthfulness) × 2 (prior exposure: repeated, new) × 2 (headline truth: true, false) mixed-model factorial. Initial rating task was manipulated between subjects, and prior exposure and headline truth were manipulated within subjects. Eighty-five participants were in the interest rating condition and 87 were in the truthfulness rating condition.
Materials and procedure
The materials consisted of 32 news headlines, 16 true and 16 false. The true headlines were taken from the website USNews.com, whereas the false headlines were taken from the fact-checking website Snopes.com. The headlines were edited to be in the same font and all accompanying pictures were edited to be the same size. We included pictures with headlines because most fake news studies have included pictures (e.g., Pennycook et al. 2018), although the inclusion of these pictures has been shown to increase perceived accuracy of both true and false headlines (Smelter and Calvillo 2020). All false headlines had received a false rating from Snopes. All headlines appeared on their respective websites in July, August, and September of 2019. The headlines were a mixture of political and nonpolitical headlines, and the political headlines contained some that were pro-liberal and some that were pro-conservative. We selected false headlines with the intent of capturing a representative set of fake news that existed at the time. To select true headlines that were somewhat implausible, we used the Offbeat section of US News for many of them. We also included some true political headlines (from US News) so that there were some true and some false political headlines. Figure 1 contains examples of true and false headlines that were pro-conservative, pro-liberal, and nonpolitical. The entire set of headlines is available on the OSF page for this study. Headlines differ from typical materials used in illusory truth studies. Headlines’ truth may be easier to judge based on knowledge than typically used general knowledge statements, but previous studies have shown the illusory truth effect with headlines (Pennycook et al. 2018; Smelter and Calvillo 2020).
Participants were randomly assigned to either the interest or truthfulness rating condition. Participants then rated 16 headlines, 8 true and 8 false. We counterbalanced which 16 headlines from the larger set of 32 they rated. Participants made their initial ratings on a 6-point scale, either from very uninteresting to very interesting or from definitely false to definitely true. This was the same initial rating scale used by Brashier et al. (2020). Immediately after initial ratings, participants rated the accuracy of all 32 headlines (16 of which they had previously rated and 16 of which were new) on a 4-point scale from not at all accurate to very accurate. We used a different scale for final ratings for two reasons. First, it is the same scale commonly used in fake news studies (e.g., Pennycook et al. 2018), and using the same scale facilitates comparison across studies. Second, using different initial and final rating scales prevented participants from remembering and duplicating their initial responses. The lack of delay between initial and final ratings is similar to the procedure of Brashier et al. (2020). After completing the final accuracy ratings, participants answered some demographic questions (age, gender) and two honesty questions: whether they had responded randomly or without reading any questions in the study, and whether they had looked up any headlines online. Participants who responded yes to either question were omitted from analysis (n = 40). Finally, participants were debriefed and paid for their participation. We conducted this experiment with TurkPrime (Litman et al. 2016).
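The assignment and counterbalancing logic can be illustrated with the hypothetical sketch below (Python; the headline identifiers and the alternating assignment rule are placeholders, not the actual TurkPrime implementation):

```python
# Hypothetical illustration of random assignment and counterbalancing (not the actual study code).
import random

headlines = [f"headline_{i:02d}" for i in range(32)]   # placeholder IDs; the real set had 16 true, 16 false
set_a, set_b = headlines[:16], headlines[16:]          # two counterbalanced halves of 16

def assign(participant_id: int) -> dict:
    """Assign a rating condition at random and alternate the counterbalanced headline set."""
    condition = random.choice(["interest", "truthfulness"])   # between-subjects factor
    initial_items = set_a if participant_id % 2 == 0 else set_b
    return {
        "condition": condition,
        "initial_rating_items": initial_items,   # rated on the 6-point initial scale
        "final_rating_items": headlines,         # all 32, rated on the 4-point accuracy scale
    }

print(assign(participant_id=7)["condition"])
```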
Results and discussion
We conducted a three-way mixed-model ANOVA with initial rating task, prior exposure, and headline truth as independent variables and perceived accuracy as the dependent variable. Table 1 contains the mean perceived accuracy for each condition. We found a main effect of prior exposure, F(1, 170) = 8.39, p = .004, ηp2 = .05. Repeated headlines (M = 2.57, 95% CI [2.50, 2.63]) resulted in greater perceived accuracy than new headlines (M = 2.48, 95% CI [2.40, 2.54]). We also found a significant main effect of headline truth, F(1, 170) = 163.33, p < 0.001, ηp2 = .49. True headlines (M = 2.77, 95% CI [2.71, 2.83]) resulted in greater perceived accuracy than false headlines (M = 2.27, 95% CI [2.19, 2.35]). The specific type of initial ratings did not significantly affect final perceived accuracy, F(1, 170) = 0.13, p = .718, ηp2 = .00.
The primary hypothesis was that there would be an interaction between prior exposure and initial ratings. Specifically, we predicted that repeated headlines would be perceived as more accurate than new headlines when the initial rating was about interest, but not when it was about truthfulness. This interaction was not statistically significant, F(1, 170) = 3.00, p = .085, ηp2 = .02. Because this interaction was nearly significant, we conducted simple effects tests to examine the specific simple effects that we predicted. These analyses were exploratory and not preregistered. Figure 2 displays the relevant means. For participants who made initial interest ratings, their subsequent perceived accuracy was greater for repeated headlines than for new headlines, t(84) = 3.01, p = .003, d = 0.33. This was not the case for those who made initial truthfulness ratings, t(86) = 0.91, p = .367, d = 0.10. Repeated headlines and new headlines had similar perceived accuracy. No other interactions were significant.
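A minimal sketch of such simple-effects tests (Python with pandas and scipy) is shown below; the data frame, its column names, and the simulated values are hypothetical, and the effect size computed is the paired-samples d (d_z), which may differ from the formula used in the original analyses.

```python
# Sketch of the simple-effects tests: repeated vs. new headline accuracy within each rating condition.
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical per-participant summaries (simulated placeholder values).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "condition": ["interest"] * 85 + ["truthfulness"] * 87,   # group sizes from Experiment 1
    "acc_repeated": rng.normal(2.6, 0.5, 172),
    "acc_new": rng.normal(2.5, 0.5, 172),
})

for cond, grp in df.groupby("condition"):
    diff = grp["acc_repeated"] - grp["acc_new"]
    t, p = stats.ttest_rel(grp["acc_repeated"], grp["acc_new"])
    d_z = diff.mean() / diff.std(ddof=1)   # paired-samples effect size
    print(f"{cond}: t({len(grp) - 1}) = {t:.2f}, p = {p:.3f}, d_z = {d_z:.2f}")
```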
Experiment 2
The primary purpose of Experiment 1 was to examine the interaction between initial ratings and prior exposure. The interaction was not significant, but simple effects tests were consistent with our hypothesis. However, Experiment 1 may have been underpowered to detect this interaction. The power analysis was based on the number of participants needed in a condition to detect an effect of prior exposure. There were no relevant data available for the interaction. The primary purpose of Experiment 2 was to again test the interaction between initial ratings and prior exposure, using the interaction effect size of Experiment 1 to sufficiently power a test of this interaction. The secondary purpose of Experiment 2 was to examine if participants’ political ideology would affect their ratings of perceived truth for news headlines. Previous investigations have found that people perceive politically concordant headlines as more accurate than politically discordant headlines (Pennycook et al. 2018; Pennycook and Rand 2019). A similar finding has been referred to as a political bias (Faragó et al. 2019).
In Experiment 2, we used a set of all political headlines and we included an equal number of pro-liberal and pro-conservative headlines to examine political bias. We also included a measure of cognitive reflection in Experiment 2. Fake news studies have found that cognitive reflection predicts better discernment of true and false headlines (Bronstein et al. 2019; Pennycook and Rand 2019, 2020). We attempted to replicate this finding in Experiment 2 and to examine how cognitive reflection relates to political bias. This political bias shares some similarity with other phenomena, such as myside bias, belief bias, and motivated reasoning. Myside bias occurs when individuals’ prior attitudes and opinions bias how they evaluate and generate evidence (Stanovich et al. 2013), belief bias occurs when participants accept more believable conclusions than unbelievable conclusions, independent of the conclusions’ validity (Evans et al. 1983), and motivated reasoning occurs when participants’ preferences affect their evaluation of evidence or decisions (Kunda 1990). In each of these, participants’ prior knowledge or attitudes bias their performance. We believe something similar occurs with political bias in headline judgments: participants perceive ideologically consistent headlines as more accurate than ideologically inconsistent headlines. According to the extant literature on each domain, cognitive ability (often measured by cognitive reflection) has distinctive relationships with these different phenomena: cognitive ability is unrelated to myside bias (e.g., Stanovich and West 2007, 2008), it is negatively related to belief bias (e.g., Toplak et al. 2011), and it is positively related to motivated reasoning (e.g., Kahan 2013); participants with greater cognitive ability engage in more motivated reasoning. Because of the similarity between political bias, myside bias, belief bias, and motivated reasoning, and the associations of the latter three with cognitive ability, we examined the relationship between cognitive reflection and political bias.
In Experiment 2, our primary prediction was the interaction between rating and prior exposure. Specifically, we predicted that with initial interest ratings, prior exposure would increase subsequent perceived accuracy, but with initial truthfulness ratings, prior exposure would not affect subsequent perceived accuracy. We also predicted a main effect of prior exposure (repeated headlines would result in greater perceived accuracy than new headlines) and of headline truth on perceived accuracy (true headlines would result in greater perceived accuracy than false headlines). We also expected to find evidence for political bias: perceived accuracy would be greater for politically concordant headlines than for politically discordant headlines. Finally, we predicted that cognitive reflection performance would predict news discernment, and we examined how cognitive reflection performance related to political bias.
Methods
Power analysis
We conducted a power analysis to determine our sample size. We used the interaction effect size from Experiment 1 (ηp2 = .02). Using G*Power 3 (Faul et al. 2007), we found that we needed 387 participants to have a power of 0.80 to detect this interaction. Thus, we aimed for a total of 388 participants (with 194 in each condition).
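The same conversion, applied to the Experiment 1 interaction effect size under a simplified two-group framing (again an illustration, not the original G*Power run), gives a total close to the preregistered target:

```python
# Illustrative only: approximate the Experiment 2 sample-size target.
import math
from statsmodels.stats.power import FTestAnovaPower

cohens_f = math.sqrt(0.02 / 0.98)   # Experiment 1 interaction effect size expressed as Cohen's f
n_total = FTestAnovaPower().solve_power(effect_size=cohens_f, alpha=0.05,
                                        power=0.80, k_groups=2)
print(f"approximate total N = {math.ceil(n_total)}")   # near the preregistered target of 388
```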
Participants
We preregistered the same two inclusion criteria as in Experiment 1. A total of 413 Mechanical Turk workers completed this experiment, and 403 met both inclusion criteria. Of these 403 participants, 233 identified as female, 166 identified as male, 3 identified as another gender, and 1 declined to respond to the gender question. Additionally, 189 identified as Democrats, 113 identified as Republicans, and 101 identified as neither. Participants ranged in age from 19 to 77, with a median of 38 years, and all participants claimed that they resided in the USA.
Design
The design of Experiment 2 was a 2 (initial rating task: interest, truthfulness) × 2 (prior exposure: previously rated, new) × 2 (headline truth: true, false) mixed-model factorial. Initial rating task was manipulated between subjects, and prior exposure and headline truth were manipulated within subjects. One hundred ninety-nine participants were in the interest rating condition and 204 were in the truthfulness rating condition.
Materials and procedure
The materials included 32 news headlines, 16 true and 16 false. Again, the true headlines were taken from the website USNews.com, whereas the false headlines were taken from the fact-checking website Snopes.com. All false headlines had received a false rating from Snopes. All headlines appeared on their respective websites between November 2018 and September 2019, and all headlines were edited to have the same font and picture size. Unlike in Experiment 1, the headlines in Experiment 2 were all political and contained an equal number of pro-liberal and pro-conservative true and false headlines. The entire set of headlines is available on the OSF page for this study and in Additional file 1. Experiment 2 also included a cognitive reflection test (CRT). We selected seven CRT items, drawn from four sources (Baron et al. 2015; Oldrati et al. 2016; Primi et al. 2016; Thomson and Oppenheimer 2016), that had provided good variability with Mechanical Turk workers in our previous studies. The specific CRT items are included on the OSF page for this study.
The procedure was similar to that of Experiment 1. Participants were randomly assigned to either the interest or truthfulness rating conditions and then rated 16 headlines (8 true and 8 false) on the same 6-point scales as Experiment 1. We counterbalanced which headlines received initial ratings. After these initial ratings, participants answered some demographic questions (age, gender, political party), some political ideology questions, and then completed the 7-item CRT. Participants then rated the accuracy of all 32 headlines (16 previously rated and 16 new) on the same 4-point scale as Experiment 1. After completing the final accuracy ratings, participants answered the same two honesty questions as those in Experiment 1. Ten participants failed at least one honesty check question. Finally, participants were debriefed and paid for their participation. We conducted this experiment with TurkPrime (Litman et al. 2016).
Results and discussion
To test our main hypotheses, we conducted a three-way mixed-model ANOVA with initial rating task, prior exposure, and headline truth as independent variables and perceived accuracy as the dependent variable. Table 2 contains the mean perceived accuracy for each condition. We found a main effect of prior exposure, F(1, 401) = 31.29, p < 0.001, ηp2 = .07. Repeated headlines (M = 2.54, 95% CI [2.50, 2.57]) resulted in greater perceived accuracy than new headlines (M = 2.43, 95% CI [2.39, 2.47]). We also found a significant main effect of headline truth, F(1, 401) = 819.99, p < 0.001, ηp2 = .67. True headlines (M = 2.79, 95% CI [2.76, 2.83]) resulted in greater perceived accuracy than false headlines (M = 2.18, 95% CI [2.14, 2.21]). The specific type of initial ratings did not have a significant main effect on final perceived accuracy, F(1, 401) = 1.32, p = .251, ηp2 = .00.
Our primary hypothesis was that there would be an interaction between prior exposure and initial rating. Specifically, we predicted that repeated headlines would be rated as more accurate than new headlines when the initial rating was about interest, but not when it was about truthfulness. This interaction was significant, F(1, 401) = 12.98, p < 0.001, ηp2 = .03. Figure 3 displays the relevant means. We conducted simple effects tests to examine the specific simple effects that we predicted. As predicted, for participants who made initial interest ratings, subsequent perceived accuracy was greater for repeated headlines than for new headlines, t(198) = 5.36, p < .001, d = 0.38. This was not the case for those who made initial truthfulness ratings, t(203) = 1.90, p = .059, d = 0.13. For those participants, repeated headlines and new headlines resulted in similar perceived accuracy (Fig. 3).
The two-way interactions between initial rating and headline truth and between prior exposure and headline truth were not significant; F(1, 401) = 3.05, p = .081, ηp2 = .01; F(1, 401) = 0.02, p = .885, ηp2 = .00, respectively. The three-way interaction between initial rating, prior exposure, and headline truth was significant, F(1, 401) = 3.96, p = .047, ηp2 = .01. To explore this interaction, we examined the effects of initial rating and prior exposure separately for true and false headlines. This interaction was unexpected, and these simple effects tests were not preregistered. With true headlines, the two-way interaction between initial rating and prior exposure was not significant, F(1, 401) = 2.88, p = .091, ηp2 = .01, whereas this interaction was significant with false headlines, F(1, 401) = 18.22, p < 0.001, ηp2 = .04.
We also examined how the concordance of headlines with participants’ political ideology affected accuracy ratings. For participants who identified as Republicans, we coded pro-conservative headlines as concordant and pro-liberal headlines as discordant, and we did the opposite for participants who identified as Democrats. Participants who identified as neither Republican nor Democrat were excluded from these analyses. These analyses deviated from our preregistered plan. We added headline concordance as a factor to the ANOVA reported in the previous paragraphs, yielding a four-way ANOVA. Participants demonstrated a political bias: they perceived politically concordant headlines (M = 2.77, 95% CI [2.72, 2.83]) as more accurate than discordant headlines (M = 2.21, 95% CI [2.17, 2.27]), F(1, 300) = 260.16, p < 0.001, ηp2 = .46. Concordance interacted with headline truth, F(1, 300) = 15.53, p < 0.001, ηp2 = .05. This interaction appears to have occurred because the effect of political concordance was greater with false headlines (concordant: M = 2.50, 95% CI [2.44, 2.56]; discordant: M = 1.88, 95% CI [1.82, 1.94]) than it was with true headlines (concordant: M = 3.05, 95% CI [2.99, 3.11]; discordant: M = 2.56, 95% CI [2.51, 2.62]). Concordance did not interact with any other variables in two-way interactions, three-way interactions, or the four-way interaction.
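A hypothetical pandas sketch of this concordance coding (column names, value labels, and the toy ratings are illustrative, not the actual data file):

```python
# Illustrative coding of political concordance from party identification and headline lean.
import pandas as pd

ratings = pd.DataFrame({   # hypothetical long-format ratings
    "party": ["Democrat", "Democrat", "Republican", "Republican"],
    "headline_lean": ["pro-liberal", "pro-conservative", "pro-liberal", "pro-conservative"],
    "accuracy": [3, 2, 1, 4],
})

def code_concordance(row):
    """Concordant if the headline's lean matches the participant's party; independents excluded."""
    if row["party"] == "Republican":
        return "concordant" if row["headline_lean"] == "pro-conservative" else "discordant"
    if row["party"] == "Democrat":
        return "concordant" if row["headline_lean"] == "pro-liberal" else "discordant"
    return None   # participants identifying as neither party are excluded from these analyses

ratings["concordance"] = ratings.apply(code_concordance, axis=1)
print(ratings)
```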
We then examined CRT performance. Overall, participants answered a mean of 3.35 (out of 7) CRT items correctly (Cronbach’s α = 0.61; participants gave the intuitive answer to 2.17 items, on average). CRT performance was positively correlated with news discernment, r(401) = .26, p < 0.001. Next, we explored how CRT performance related to political bias. We preregistered this analysis as exploratory. To calculate political bias, we subtracted the perceived accuracy of politically discordant headlines from that of politically concordant headlines. CRT performance was positively correlated with political bias, r(300) = .12, p = .036. Thus, more analytic participants showed greater political bias. To further explore this relationship, we examined the relationships between CRT performance and ratings of true and false news that was politically concordant or discordant. CRT performance was positively correlated with ratings of politically concordant true news, r(300) = .16, p = .005, negatively correlated with ratings of politically discordant fake news, r(300) = −0.20, p < 0.001, and not significantly correlated with ratings of politically concordant fake news, r(300) = −0.07, p = .252, or politically discordant true news, r(300) = .01, p = .864. These results indicate that more analytic participants showed greater political bias because they were more likely to perceive concordant true news as accurate and less likely to perceive discordant fake news as accurate. In Additional file 1, we also report mean perceived accuracy for each headline based on whether participants were Democrats, Republicans, or neither, and the correlations between CRT performance and perceived accuracy for each group. These analyses were not preregistered.
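A minimal sketch of the bias score and its correlation with CRT performance (Python with scipy; the per-participant columns and simulated values are hypothetical, standing in for whatever software produced the reported statistics):

```python
# Illustrative computation of the political bias score and its correlation with CRT performance.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
participants = pd.DataFrame({   # hypothetical per-participant summaries (302 partisans, as in Experiment 2)
    "acc_concordant": rng.normal(2.8, 0.4, 302),
    "acc_discordant": rng.normal(2.2, 0.4, 302),
    "crt_correct": rng.integers(0, 8, 302),   # number of CRT items correct out of 7
})

# Political bias = perceived accuracy of concordant headlines minus discordant headlines.
participants["political_bias"] = participants["acc_concordant"] - participants["acc_discordant"]

r, p = pearsonr(participants["crt_correct"], participants["political_bias"])
print(f"r({len(participants) - 2}) = {r:.2f}, p = {p:.3f}")
```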
General discussion
The present study examined the illusory truth effect with news headlines. Replicating previous studies (Pennycook et al. 2018; Smelter and Calvillo 2020), we found that prior exposure to fake news increased perceived accuracy. Our primary goal was to extend the findings of Brashier et al. (2020) to evaluations of news headlines. Brashier et al. (2020) found that repeated exposure had the typical effect when initial ratings were about participants’ interest, but this effect was diminished when the initial ratings were about truthfulness. We found the same pattern in the present study. The predicted interaction failed to reach significance in Experiment 1, which was likely underpowered, but the simple effects tests were consistent with predictions. We increased power in Experiment 2 and found the same pattern and a significant interaction between prior exposure and initial rating. These results are consistent with meta-analytic findings that the illusory truth effect is smaller in studies that included initial ratings of truthfulness (Dechêne et al. 2010). It is important to note that the effects of repeated exposure on perceived accuracy were in the small to medium range (d = 0.33 and d = 0.38 in Experiments 1 and 2, respectively; Cohen 1992) for participants who made initial interest ratings. These effects were similar in size to those previously reported (Pennycook et al. 2018; Smelter and Calvillo 2020), and suggest that repeated exposure to headlines modestly increases perceived accuracy.
Our secondary goal was to examine political bias in judgments of headlines’ accuracy. In Experiment 2, we found evidence for political bias in headline evaluations: politically concordant headlines resulted in greater perceived accuracy than politically discordant headlines. These results replicate those from previous studies (Pennycook et al. 2018; Pennycook and Rand 2019). We also replicated previous reports that CRT performance predicted news discernment (Bronstein et al. 2019; Pennycook and Rand 2019, 2020). Although we did not predict it, we found that CRT performance was positively correlated with political bias: more analytic participants demonstrated greater political bias. This relationship resulted from more analytic participants perceiving politically concordant true headlines as more accurate and politically discordant false headlines as less accurate. Interestingly, greater cognitive reflection performance is thus related both to better news discernment and to judgments that are more biased toward participants’ political ideology. More research is needed to better understand the relationship between CRT performance and political bias.
Brashier and Marsh (2020) reviewed the literature on how people judge truth. They identified three cues that influence truth judgments: base rates, memories, and feelings. These cues can explain how participants judge the accuracy of news headlines. Because most headlines that people have encountered have been true, the base rate truth of headlines should bias participants to believe that a headline is true. Cues from memories and feelings can then update beliefs about the truth of a headline. Therefore, according to this conceptualization, participants ought to believe that headlines are accurate if they match the content in their memory, and they should disbelieve headlines that contradict their memory contents. For example, if participants have read news stories from trusted sources, they may be more likely to believe subsequently related headlines that they encounter. Finally, participants take cues from the feelings elicited by the headlines. The effects reported in the present study concern participants’ feelings. Repeated exposure increases processing fluency, and the feeling of fluency serves as a cue for truth. Initial truth ratings, however, can allow participants to discount fluency. We believe that political bias also arises from participants’ feelings. Specifically, participants experience more positive feelings with politically concordant and more negative feelings with politically discordant headlines. Collectively then, based on the model described by Brashier and Marsh (2020), we believe that participants start with a base rate biased toward rating headlines as accurate and then update them based on a memory search for relevant information and the feelings that accompany the headlines.
The results of the present study have important implications for the effects of exposure to fake news. To reduce the illusory truth effect that occurs when news headlines are repeatedly encountered, people should think about the truth of each headline they encounter. The interestingness of online information affects its likelihood of being shared; for example, content is more likely to spread on Twitter if it is viewed as interesting (Bakshy et al. 2011). The present study demonstrates that evaluating news and internet content in this way carries substantial risks. Additionally, people should consider their ideological biases. Reducing bias among ideologues may reduce extremism, and reducing extremism has been identified as one of psychological science’s most imperative goals (Lilienfeld et al. 2009). Confirmation bias can be reduced with brief interventions, and this reduction can last at least 2 months (Morewedge et al. 2015). Future research should address the efficacy of debiasing political bias in the context of news accuracy judgments.
It is encouraging that individuals’ perceived accuracy was greater for true headlines than for false headlines, replicating previous studies (e.g., Pennycook and Rand 2019; Smelter and Calvillo 2020). Nonetheless, people seem to struggle to stay vigilant when consuming information (e.g., Pennycook and Rand 2019), and through this laziness, individuals may leave themselves vulnerable to being influenced by factors other than accuracy, like interestingness. For example, Pennycook et al. (2019) suggested that people often do not intentionally spread misinformation, but instead may be influenced by factors other than truthfulness when deciding what to share. Recent investigations speak to this suggestion by showing that asking people to consider the accuracy of information reduces the sharing of fake news (Fazio 2020; Pennycook et al. 2019, 2020). Thus, our investigation presents another simple and scalable benefit of prompting the critical evaluation of news content.
Conclusion
In two preregistered studies, we found that the effects of prior exposure to news headlines on perceived accuracy can be reduced if participants consider headlines’ truth at initial exposure. We also found a political bias in news headline evaluations, such that participants rated politically concordant headlines as more accurate than politically discordant headlines. This political bias was larger among participants with greater cognitive reflection. These findings highlight some of the challenges of combating the effects of fake news and suggest some possible interventions. Further, the need for effective interventions is pressing given that fake news has been shown to influence individuals’ attitudes toward real-world issues, including candidates for political office (Bovet and Makse 2019), public policy issues (Bastos and Mercea 2019), and health-related information (Iacobucci 2019). The findings of the present study suggest that assessing news headlines’ interest can increase susceptibility to false information. The task of wading through an increasingly information-dense world may often feel daunting, especially given the pervasive influence of the illusory truth effect and the abundance of misinformation. However, we show that initially evaluating news headlines for accuracy can help to combat the illusion that a familiar headline is a truthful one.
Availability of data and materials
The materials used and datasets generated during the current study are available on the Open Science Framework at https://osf.io/8xvdy/.
References
Altay, S., de Araujo, E., & Mercier, H. (2020). “If this account is true, it is most enormously wonderful”: Interestingness-if-true and the sharing of true and false news. PsyArXiv. https://doi.org/10.31234/osf.io/tdfh5
Arkes, H. R., Hackett, C., & Boehm, L. (1989). The generality of the relation between familiarity and judged validity. Journal of Behavioral Decision Making, 2(2), 81–94. https://doi.org/10.1002/bdm.3960020203
Bacon, F. T. (1979). Credibility of repeated statements: Memory for trivia. Journal of Experimental Psychology: Human Learning and Memory, 5(3), 241–252. https://doi.org/10.1037/0278-7393.5.3.241
Bakshy, E., Hofman, J. M., Mason, W. A., & Watts, D. J. (2011). Everyone’s an influencer: Quantifying influence on Twitter. In I. King (Chair), Proceedings of the 4th ACM international conference on web search and data mining (pp. 65–74), Hong Kong, China. https://doi.org/10.1145/1935826.1935845
Baron, J., Scott, S., Fincher, K., & Metz, S. E. (2015). Why does the Cognitive Reflection Test (sometimes) predict utilitarian moral judgment (and other things)? Journal of Applied Research in Memory and Cognition, 4(3), 265–284. https://doi.org/10.1016/j.jarmac.2014.09.003
Bastos, M. T., & Mercea, D. (2019). The Brexit botnet and user-generated hyperpartisan news. Social Science Computer Review, 37(1), 38–54. https://doi.org/10.1177/0894439317734157
Bovet, A., & Makse, H. A. (2019). Influence of fake news in Twitter during the 2016 US presidential election. Nature Communications, 10(7). https://doi.org/10.1038/s41467-018-07761-2
Brashier, N. M., Eliseev, E. D., & Marsh, E. J. (2020). An initial accuracy focus prevents illusory truth. Cognition, 194, 104054. https://doi.org/10.1016/j.cognition.2019.104054
Brashier, N. M., & Marsh, E. J. (2020). Judging truth. Annual Review of Psychology, 71, 499–515. https://doi.org/10.1146/annurev-psych-010419-050807
Bronstein, M. V., Pennycook, G., Bear, A., Rand, D. G., & Cannon, T. D. (2019). Belief in fake news is associated with delusionality, dogmatism, religious fundamentalism, and reduced analytic thinking. Journal of Applied Research in Memory and Cognition, 8(1), 108–117. https://doi.org/10.1016/j.jarmac.2018.09.005
Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155–159. https://doi.org/10.1037/0033-2909.112.1.155
De keersmaecker, J., Dunning, D., Pennycook, G., Rand, D. G., Sanchez, C., Unkelbach, C., & Roets, A. (2020). Investigating the robustness of the illusory truth effect across individual differences in cognitive ability, need for cognitive closure, and cognitive style. Personality and Social Psychology Bulletin, 46(2), 204–215. https://doi.org/10.1177/0146167219853844
Dechêne, A., Stahl, C., Hansen, J., & Wänke, M. (2010). The truth about the truth: A meta-analytic review of the truth effect. Personality and Social Psychology Review, 14(2), 238–257. https://doi.org/10.1177/1088868309352251
Evans, J. S. B., Barston, J. L., & Pollard, P. (1983). On the conflict between logic and belief in syllogistic reasoning. Memory & Cognition, 11(3), 295–306. https://doi.org/10.3758/BF03196976
Faragó, L., Kende, A., & Krekó, P. (2019). We only believe in news that we doctored ourselves: The connection between partisanship and political fake news. Social Psychology, 51(2), 77–90. https://doi.org/10.1027/1864-9335/a000391
Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G* Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/BF03193146
Fazio, L. (2020). Pausing to consider why a headline is true or false can help reduce the sharing of false news. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-009
Fazio, L. K., Brashier, N. M., Payne, B. K., & Marsh, E. J. (2015). Knowledge does not protect against illusory truth. Journal of Experimental Psychology: General, 144(5), 993–1002. https://doi.org/10.1037/xge0000098
Fazio, L. K., Rand, D. G., & Pennycook, G. (2019). Repetition increases perceived truth equally for plausible and implausible statements. Psychonomic Bulletin & Review, 26, 1705–1710. https://doi.org/10.3758/s13423-019-01651-4
Garcia-Marques, T., Silva, R. R., Reber, R., & Unkelbach, C. (2015). Hearing a statement now and believing the opposite later. Journal of Experimental Social Psychology, 56, 126–129. https://doi.org/10.1016/j.jesp.2014.09.015
Hasher, L., Goldstein, D., & Toppino, T. (1977). Frequency and the conference of referential validity. Journal of Verbal Learning and Verbal Behavior, 16, 107–112. https://doi.org/10.1016/S0022-5371(77)80012-1
Iacobucci, G. (2019). Vaccination: “Fake news” on social media may be harming UK uptake, report warns. BMJ: British Medical Journal. https://doi.org/10.1136/bmj.l365b
Johar, G. V., & Roggeveen, A. L. (2007). Changing false beliefs from repeated advertising: The role of claim-refutation alignment. Journal of Consumer Psychology, 17(2), 118–127. https://doi.org/10.1016/S1057-7408(07)70018-9
Kahan, D. M. (2013). Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision Making, 8, 407–424.
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498. https://doi.org/10.1037/0033-2909.108.3.480
Lilienfeld, S. O., Ammirati, R., & Landfield, K. (2009). Giving debiasing away: Can psychological research on correcting cognitive errors promote human welfare? Perspectives on Psychological Science, 4, 390–398. https://doi.org/10.1111/j.1745-6924.2009.01144.x
Litman, L., Robinson, J., & Abberbock, T. (2016). TurkPrime.com: A versatile crowdsourcing data acquisition platform for the behavioral sciences. Behavior Research Methods, 49, 433–442. https://doi.org/10.3758/s13428-016-0727-z
Morewedge, C. K., Yoon, H., Scopelliti, I., Symborski, C. W., Korris, J. H., & Kassam, K. S. (2015). Debiasing decisions: Improved decision making with a single training intervention. Policy Insights from the Behavioral and Brain Sciences, 2, 129–140. https://doi.org/10.1177/2372732215600886
Nadarevic, L., & Aßfalg, A. (2017). Unveiling the truth: Warnings reduce the repetition-based truth effect. Psychological Research Psychologische Forschung, 81(4), 814–826. https://doi.org/10.1007/s00426-016-0777-y
Oldrati, V., Patricelli, J., Colombo, B., & Antonietti, A. (2016). The role of dorsolateral prefrontal cortex in inhibition mechanism: A study on cognitive reflection test and similar tasks through neuromodulation. Neuropsychologia, 91, 499–508. https://doi.org/10.1016/j.neuropsychologia.2016.09.010
Pennycook, G., Cannon, T. D., & Rand, D. G. (2018). Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General, 147, 1865–1880. https://doi.org/10.1037/xge0000465
Pennycook, G., Epstein, Z., Mosleh, M., Arechar, A., Eckles, D., & Rand, D. (2019). Understanding and reducing the spread of misinformation online. PsyArXiv. https://doi.org/10.31234/osf.io/3n9u8
Pennycook, G., McPhetres, J., Zhang, Y., Lu, J. G., & Rand, D. G. (2020). Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy-nudge intervention. Psychological Science, 31(7), 770–780. https://doi.org/10.1177/0956797620939054
Pennycook, G., & Rand, D. G. (2019). Lazy not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition, 188, 39–59. https://doi.org/10.1016/j.cognition.2018.06.011
Pennycook, G., & Rand, D. G. (2020). Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. Journal of Personality, 88(2), 185–200. https://doi.org/10.1111/jopy.12476
Primi, C., Morsanyi, K., Chiesi, F., Donati, M. A., & Hamilton, J. (2016). The development and testing of a new version of the cognitive reflection test applying item response theory (IRT). Journal of Behavioral Decision Making, 29, 453–469. https://doi.org/10.1002/bdm.1883
Reber, R., & Unkelbach, C. (2010). The epistemic status of processing fluency as source for judgments of truth. Review of Philosophy and Psychology, 1, 563–581. https://doi.org/10.1007/s13164-010-0039-7
Schwartz, M. (1982). Repetition and rated truth value of statements. American Journal of Psychology, 95, 393–407. https://doi.org/10.2307/1422132
Smelter, T. J., & Calvillo, D. P. (2020). Pictures and repeated exposure increase perceived accuracy of news headlines. Applied Cognitive Psychology, 34(5), 1061–1071. https://doi.org/10.1002/acp.3684
Stanovich, K. E., & West, R. F. (2007). Natural myside bias is independent of cognitive ability. Thinking & Reasoning, 13(3), 225–247. https://doi.org/10.1080/13546780600780796
Stanovich, K. E., & West, R. F. (2008). On the relative independence of thinking biases and cognitive ability. Journal of Personality and Social Psychology, 94(4), 672–695. https://doi.org/10.1037/0022-3514.94.4.672
Stanovich, K. E., West, R. F., & Toplak, M. E. (2013). Myside bias, rational thinking, and intelligence. Current Directions in Psychological Science, 22(4), 259–264. https://doi.org/10.1177/0963721413480174
Thomson, K. S., & Oppenheimer, D. M. (2016). Investigating an alternate form of the cognitive reflection test. Judgment and Decision Making, 11, 99–113.
Toplak, M. E., West, R. F., & Stanovich, K. E. (2011). The Cognitive Reflection Test as a predictor of performance on heuristics-and-biases tasks. Memory & Cognition, 39(7), 1275–1289. https://doi.org/10.3758/s13421-011-0104-1
Unkelbach, C., Koch, A., Silva, R. R., & Garcia-Marques, T. (2019). Truth by repetition: Explanations and implications. Current Directions in Psychological Science, 28, 247–253. https://doi.org/10.1177/0963721419827854
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559
Wang, W. C., Brashier, N. M., Wing, E. A., Marsh, E. J., & Cabeza, R. (2016). On known unknowns: Fluency and the neural mechanisms of illusory truth. Journal of Cognitive Neuroscience, 28, 739–746. https://doi.org/10.1162/jocn_a_00923
Acknowledgements
None.
Funding
Publication fees were provided by a Research, Scholarship, and Creativity grant from the Office of Graduate Studies and Research at California State University San Marcos.
Author information
Contributions
Both authors contributed to the design of the two studies. DPC oversaw the online data collection and analyzed the data from both studies. Both authors read and revised the manuscript.
Ethics declarations
Ethics approval and consent to participate
Ethics approval was obtained by the Institutional Review Board at California State University San Marcos prior to data collection. All participants provided informed consent to participate in the experiments described in the manuscript and to have their de-identified data posted in a data repository.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Additional file 1.
Exploratory analyses for Experiment 2.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Calvillo, D.P., Smelter, T.J. An initial accuracy focus reduces the effect of prior exposure on perceived accuracy of news headlines. Cogn. Research 5, 55 (2020). https://doi.org/10.1186/s41235-020-00257-y