Limited not lazy: a quasi-experimental secondary analysis of evidence quality evaluations by those who hold implausible beliefs
Cognitive Research: Principles and Implications volume 5, Article number: 65 (2020)
Past research suggests that an uncritical or ‘lazy’ style of evaluating evidence may play a role in the development and maintenance of implausible beliefs. We examine this possibility by using a quasi-experimental design to compare how low- and high-quality evidence is evaluated by those who do and do not endorse implausible claims. Seven studies conducted during 2019–2020 provided the data for this analysis (N = 746). Each of the seven primary studies presented participants with high- and/or low-quality evidence and measured implausible claim endorsement and evaluations of evidence persuasiveness (via credibility, value, and/or weight). A linear mixed-effect model was used to predict persuasiveness from the interaction between implausible claim endorsement and evidence quality. Our results showed that endorsers were significantly more persuaded by the evidence than non-endorsers, but both groups were significantly more persuaded by high-quality than low-quality evidence. The interaction between endorsement and evidence quality was not significant. These results suggest that the formation and maintenance of implausible beliefs by endorsers may result from less critical evidence evaluations rather than a failure to analyse. This is consistent with a limited rather than a lazy approach and suggests that interventions to develop analytical skill may be useful for minimising the effects of implausible claims.
Information is more abundant and accessible than ever before. The constant stream of news contains true information, as well as errors, exaggeration, and lies. Consequently, some people come to believe highly implausible claims—for example, that the COVID-19 pandemic is a hoax. These beliefs can be costly for individuals and society, making it vital to understand who believes implausible claims and why. Research suggests that a ‘lazy’ uncritical style of evaluating evidence may be associated with the formation and maintenance of implausible beliefs. Our quasi-experimental study tests whether those who endorse implausible claims evaluate high-quality or low-quality evidence differently to those who do not. We argue that if those who believe implausible claims are generally ‘lazy’ uncritical thinkers, then they will find high- and low-quality evidence equally persuasive, while non-endorsers will not. Analysis of data from seven different studies shows that high-quality evidence was more persuasive overall than low-quality evidence for both endorsers and non-endorsers. However, endorsers were more persuaded by the presented evidence than non-endorsers were. These findings suggest that those who hold implausible beliefs are sensitive to evidence quality, but are more persuaded than those who do not hold implausible beliefs. Thus, implausible beliefs may result from limited evaluative skills, rather than a ‘lazy’ thinking style.
Information is more accessible now than ever before. The constant stream of material from news and social networks contains true information as well as errors, exaggeration, and lies. However, our capacity to process and evaluate the reliability of this information is limited and can lead to errors in thinking and judgment (Hills 2019). For example, some people come to believe highly implausible claims like conspiracy theories, fake news, and paranormal accounts. These beliefs can be costly for individuals and society (Frau-Meigs 2019; Lewandowsky et al. 2017). Indeed, we have seen that misplaced belief in fabricated, false, and implausible statements can lead to a range of undesirable behaviours like prejudice, rejection of moderate political views, disdain for scientific consensus, and a disregard for evidence-based medical advice (Allington et al. 2020; Douglas et al. 2019; Imhoff and Lamberty 2020; Zimmermann and Kohring 2020). Therefore, it is vital for us to better understand who believes implausible claims and why.
There is evidence that those who more strongly believe one implausible claim are also more likely to strongly believe other unsubstantiated claims (Bensley et al. 2020). For instance, those who endorse dubious health-related information, religious, paranormal, and conspiratorial beliefs are also more likely to be persuaded by pseudo-profound statements (a.k.a. ‘bullshit’; Pennycook et al. 2015a). Various types of implausible beliefs (e.g. magical thinking, pseudo-scientific claims, and belief in fake news) also tend to be positively correlated with each other (Barron et al. 2018; Lobato et al. 2014; Pennycook et al. 2015a; Pennycook and Rand 2019; Rizeq et al. 2020; Ståhl and van Prooijen 2018). The strength and ubiquity of these associations have led researchers to suspect that a common cognitive style may underpin many forms of implausible beliefs (Bronstein et al. 2019; Lobato et al. 2014; Rizeq et al. 2020; Ståhl and van Prooijen 2018).
Cognitive style and implausible beliefs
A cognitive style is an individual’s preferred approach for perceiving, processing and remembering information (Zhang and Sternberg 2006). Evidence suggests that a reflexive (‘Type 1’; Evans and Stanovich 2013; Kahneman 2011; Ross et al. 2016), rather than a reflective (‘Type 2’), cognitive style is associated with the formation and maintenance of various implausible beliefs (Bronstein et al. 2019; Greene and Murphy, this issue; Pennycook et al. 2015a; Pennycook et al. 2015b; Pennycook and Rand 2020; Sindermann et al. 2020). A reflexively open-minded cognitive style describes a ‘lazy’ approach to decision-making, whereby a broad range of claims are uncritically accepted, irrespective of their epistemic value (Pennycook and Rand 2020). In contrast, a reflective cognitive style describes the tendency to more slowly analyse the information presented, question one’s intuition, and consider alternatives in decision-making (Pennycook et al. 2015b; Pennycook and Rand 2020; Zhang and Sternberg 2006).
Examining the relationship between cognitive style and implausible beliefs
Studies that have investigated the relationship between cognitive style and implausible beliefs have generally explored this via correlations between measures of cognitive style and implausible claim endorsement. A wide variety of measures of cognitive style have been used in this literature, including the Cognitive Reflection Test (Frederick 2005; e.g. Greene and Murphy this issue; Pennycook and Rand 2019; Ståhl and van Prooijen 2018), the Actively Open-Minded Thinking Scale (Stanovich and West 1997; e.g. Bronstein et al. 2019; Rizeq et al. 2020), the Need For Cognition Scale (Cacioppo et al. 1996; e.g. Barron et al. 2018; Ross et al. 2016) and the Rational/Experiential Multimodal Inventory (Norris and Epstein 2011; e.g. Barron et al. 2018). Implausible claim endorsement has been examined using the Bullshit Receptivity Scale (Pennycook et al. 2015a; e.g. Pennycook and Rand 2019, 2020), Belief in Conspiracy Theories Inventory (Swami et al. 2010; e.g. Barron et al. 2018), Core Knowledge Confusion scale (Lindeman and Aarnio 2007; e.g. Rizeq et al. 2020), and Paranormal Belief Scale (Drinkwater et al. 2017; e.g. Ståhl and van Prooijen 2018), among others.
Overwhelmingly, these correlational studies have shown an association between cognitive style and implausible beliefs. Specifically, people who more strongly endorse implausible claims typically have more intuitive, reflexive cognitive styles (Barron et al. 2018; Greene and Murphy this issue; Lobato et al. 2014; Mikušková 2018; Pennycook et al. 2015a; Pennycook and Rand 2019, 2020; Rizeq et al. 2020; Ståhl and van Prooijen 2018). Furthermore, indicators of reflective thinking (i.e. open-mindedness and analytical thinking) have also been found to mediate the relationship between delusion-proneness, dogmatism, and fake news endorsement (Bronstein et al. 2019). These associations suggest that implausible beliefs may arise from a failure to engage in a deliberative evaluation of relevant information—resulting in a failure to identify the weaknesses and implausibility of epistemically suspect claims. However, other possibilities may explain the association between cognitive style and implausible belief endorsement.
The Motivated System 2 Reasoning (MS2R) account is one alternative explanation, which suggests that deliberation may actually bias people to favour information that aligns with their ideology—irrespective of epistemic value (Pennycook and Rand 2020). That is, a reflective cognitive style might increase belief in implausible claims that are consistent with one's own perspective via effortful deliberation. Pennycook and Rand (2019) tested the MS2R account by examining the relationship between cognitive style and belief in ideologically in/consistent (i.e. partisan) real or fake news. However, they found that those with a more reflective analytical style were better at discerning between real and fake news—irrespective of ideological consistency. This result led to the view that people may endorse implausible claims because they are ‘lazy, not biased’ evidence evaluators (Pennycook and Rand 2019). This interpretation is also supported by the results of experimental studies.
In a series of experiments, Swami et al. (2014) found that interventions that create cognitive disfluency and slow down information processing significantly reduce the endorsement of conspiracy claims. Similarly, Bago et al. (2020) found that participants believe false headlines more when evaluating under time pressure and cognitive load than when given unlimited time to assess the claims. Taken together, this evidence suggests that promoting reflective analysis can improve evidence evaluations and reduce the endorsement of implausible claims. However, researchers have not yet examined whether those who endorse implausible claims actually analyse evidence more poorly, or differently, than those who do not.
Researchers have also not examined whether errors in evaluating brief pseudo-profound statements or news headlines (e.g. Bago et al. 2020; Bronstein et al. 2019; Pennycook et al. 2015a; Pennycook and Rand 2019, 2020) generalise to the evaluation of more realistic materials like news articles, interviews, blogs, or opinion pieces. The tasks used in the previous research generally contain little if any substantive content beyond a statement, a headline, or a few lines of text. For example, even extended and reflective consideration of the fake news headline ‘Trump on Revamping the Military: We’re Bringing Back the Draft’ (Pennycook and Rand 2020) does not easily reveal the objective truth of that claim. Indeed, materials like this contain few cues that can be relied upon to differentiate between the true and fake claims aside from plausibility.
Thus, participants in the previous research have been given limited scope to engage in a reflective analysis—even if they wanted to. This leaves open the possibility that something other than reflective analysis separates good from poor performance on these evidence evaluation tasks and suggests it is important to provide decision-makers with more sophisticated tests of their analytical ability (Ståhl and van Prooijen 2018). For example, by presenting rich sources of information that contain objective strengths and weaknesses relevant to the reliability of the claims. One source of this type of information is expert evidence presented in courts.
Evidence quality evaluation in forensic contexts
Lay jurors in civil and criminal trials are routinely presented with complex technical and scientific information by expert witnesses (Gross 1991; Hilbert 2019; Jurs 2015). It is their duty to determine the outcome of a case based on a rational assessment of the evidence presented to the court (Edmond 2015; Raeder 2003; Thayer 1890). Jurors are directed by the judge to evaluate the evidence and decide which claims are sufficiently credible for belief (e.g. Eleventh Circuit Pattern Jury Instructions, criminal 2020; Judicial Commission of New South Wales 2020; for discussions, see Brewer 1998; Edmond 2015; Ward 2017). Yet, as in other contexts, jurors sometimes make mistakes about information quality and veracity (McAuliff and Duckworth 2010; McAuliff et al. 2009). These mistakes can be highly consequential, resulting in innocent people being convicted (or held liable) and punished for offences they did not commit (Derwin 2018; Garrett 2017; Garrett and Neufeld 2009).
Scholars and authoritative scientific bodies have raised concerns about the quality of expert evidence for decades (Giannelli 1993; Hand 1901; Hilbert 2019; Mnookin 2007; National Research Council of the Academies of Science [NRC] 2009; President’s Council of Advisors on Science and Technology [PCAST] 2016). These concerns primarily relate to genuinely held opinions that are plausible, but ultimately incorrect or insufficiently reliable. For example, low-quality opinions are those that are given without sufficient evidence that the underpinning science is repeatable, reproducible, or accurate (PCAST 2016); that are expressed incorrectly or without appropriate qualification (NRC 2009); where the proficiency of the examiner has not been demonstrated (Garrett and Mitchell 2018; Martire and Edmond 2016); and where biasing contextual information has not been appropriately disclosed or managed (Dror 2016; NRC 2009). Conversely, high(er)-quality opinions are those based on foundationally valid methods and techniques, that are expressed using valid terminology, and that appropriately disclose assumptions and limitations (NRC 2009). These opinions are produced by practitioners with appropriate qualifications and demonstrated skill, who have limited, declared, or removed potentially biasing influences (Edmond et al. 2016; Martire et al. 2020). The forensic context therefore provides a novel yet realistic setting for examining possible differences in evidence quality evaluations between those who do and do not endorse implausible claims.
The present study
In this paper, we conduct a quasi-experimental secondary analysis of data from seven studies to examine whether those who hold implausible beliefs evaluate objectively higher- or lower-quality forensic evidence differently to those who do not hold implausible beliefs. If, as past analysis suggests, those who endorse implausible claims have a ‘lazy’, reflexive cognitive style and do not engage in analysis of the evidence, we would expect endorsers to be equally persuaded by low- and high-quality evidence because their uncritical approach leads them to be insensitive to epistemic value (Pennycook and Rand 2019).
However, if those who hold implausible beliefs do engage in some—albeit imperfect—analysis, then we would anticipate some sensitivity to evidence quality, whereby high-quality evidence is more persuasive than low-quality evidence. If endorsers complete this evaluation differently to non-endorsers—as we might anticipate given that one group is persuaded by highly improbable claims and the other is not—then we might also expect an interaction between evidence quality and endorsement status. This interaction could involve over-belief of low-quality evidence and/or under-belief of high-quality evidence by endorsers compared to non-endorsers.
Data and design
We report a secondary analysis of data collected from seven studies conducted by members of a forensic decision-making research group. Each of the seven primary studies was originally designed to examine the effects of various aspects of evidence quality on perceptions of evidence persuasiveness (i.e. credibility, value, and/or weight; see Table 1 for an overview). Although it was not the main aim of these studies, our research group was also interested in whether people who believe implausible claims generally evaluate evidence differently to those who do not. To examine this question, we measured implausible claim endorsement in each study. It is these data that we analyse here using a 2 (evidence quality: high vs. low) × 2 (implausible claim endorsement: endorser vs. non-endorser) between-subjects quasi-experimental design.
Evidence quality was varied in this study by selecting, a priori, one relatively high-quality and one relatively low-quality evidence condition from the seven primary studies (see Table 1). One high- and one low-quality condition were selected for analysis from each primary study except Study 6, where all three conditions involved low-quality evidence. When combined, these 15 conditions produced an evidence quality manipulation that varied aspects of scientific rigour and transparency, methodological reliability, source trustworthiness, expert proficiency, and legal admissibility. The details of each manipulation are reported in the ‘Evidence Quality’ section below.
Implausible claim endorsement was determined by responses to implausible claims about vaccines, global warming, and a flat earth. ‘Endorsers’ were participants who rated one or more of the three claims greater than or equal to 75 on a scale from 0 ‘not at all’ to 100 ‘definitely true’. Non-endorsers were those who rated all three claims lower than 50. The dependent variables were ratings of evidence credibility, value, and weight (i.e. ‘persuasiveness’) from 0 to 100.
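The preregistered classification rule can be expressed compactly in code. The following is a minimal Python sketch; the function name and input format are ours, not part of the study materials:

```python
def classify_endorsement(ratings):
    """Classify a participant from their three implausible-claim ratings (0-100).

    Preregistered rule: 'endorser' if any claim is rated at or above 75;
    'non-endorser' if all claims are rated below 50; otherwise the
    participant is excluded from the analysis.
    """
    if any(r >= 75 for r in ratings):
        return "endorser"
    if all(r < 50 for r in ratings):
        return "non-endorser"
    return "excluded"


# Example ratings: vaccine, global warming, and flat earth claims respectively.
print(classify_endorsement([80, 10, 0]))   # endorser
print(classify_endorsement([10, 20, 49]))  # non-endorser
print(classify_endorsement([60, 40, 0]))   # excluded
```

Note that the two criteria are not exhaustive: a participant whose highest rating falls at or above 50 but below 75 matches neither definition and is dropped.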
This design, including the data for in/exclusion, high-/low-quality conditions, non-/endorsement criteria, and analytic approach, was preregistered before formal or informal inspection of implausible claim items, computation of endorsement status, or examination of the effects of endorsement status and evidence quality on the dependent variables (AsPredicted #40589; https://aspredicted.org/3rv9g.pdf).
Of the original 1,747 eligible participants in 33 conditions from the seven primary studies, 873 participants in 15 conditions were selected a priori for inclusion in the secondary analysis. All participants were based in the USA, reported they were jury-eligible, completed the study online, and were recruited between June 2019 and May 2020. Participants from Studies 1–6 were recruited online through Amazon Mechanical Turk, had approval ratings > 95% for their past work, and were compensated at up to US$10 per hour (n = 836). Participants from Study 7 were students recruited from a large south-western university in the USA who received course credit for their participation (n = 37). All participants completed a reCAPTCHA to ensure respondents were human (von Ahn et al. 2008). The combined sample contained 125 ‘endorsers’ (14.3%) and 621 ‘non-endorsers’ (71.7%). We excluded the 127 participants who did not fit our preregistered endorsement inclusion criteria (i.e. those who rated at least one implausible claim at or above 50 but none at or above 75). After exclusions, data from 746 participants were retained for analysis. See Table 2 for demographic information. The majority of this sample identified as male (55.2%), and the mean age was 37.2 years (SD = 11.7; range = 18–74). The majority identified as White/Caucasian (76.3%), and 53.8% reported college/university as their highest level of education.
Evidence quality
Evidence quality varied in different ways in each of the primary studies. In this section, we report the original research question for each primary study, a summary of the experimental manipulations, and a description of the high- and/or low-quality evidence included in this secondary analysis. Detailed descriptions of all manipulations, measures, and procedures for each primary study are also available at https://tinyurl.com/y4e75wo2.
Study 1
Examined the effect of expert attractiveness and expert quality on perceptions of evidence persuasiveness (preregistered at https://tinyurl.com/y2h46ddy). Attractiveness (absent, high, low) was varied using images of two male experts, one rated as high in attractiveness and the other rated as low in attractiveness. Expert quality was varied by describing a forensic gait expert who was either ‘strong’ or ‘weak’ on each of the eight attributes in the Expert Persuasion Expectancy (ExPEx) framework (i.e. foundation, field, speciality, ability, opinion, support, consistency, and trustworthiness; see Martire et al. 2020). The strong-ExPEx/attractiveness-absent condition served as high-quality evidence for the secondary analysis. Participants in this condition read about a validated technique, used by a practitioner with general and specifically relevant qualifications, who was unbiased and provided a strong opinion that other experts independently verified. The weak-ExPEx/attractiveness-absent condition served as low-quality evidence for the secondary analysis. Participants in this condition read about an invalid technique, used by a practitioner with irrelevant general and specialist qualifications, who was partisan and unsure about their opinion. Other experts also disagreed with the opinion presented.
Study 2
Had the same primary aim and attractiveness manipulation as Study 1. However, in this study participants evaluated a lengthy (15-page) trial transcript adapted from the real testimony of an expert witness providing speech spectrography evidence (preregistered at https://tinyurl.com/y4glmued). Expert quality was again varied from ‘strong’ to ‘weak’ using the ExPEx framework. The strong-ExPEx/attractiveness-absent condition served as high-quality evidence for the secondary analysis. Participants in this condition read about a valid technique, used by a practitioner with relevant qualifications and extensive specialist training, who employed bias mitigation strategies, used a valid form of expression, and whose work was independently verified and agreed with by two other experts. The weak-ExPEx/attractiveness-absent condition served as low-quality evidence for the secondary analysis. Participants in this condition read about an unvalidated technique, used by a practitioner who trained in an irrelevant field, who had limited specialist training or experience, who was ignorant of and displayed bias, who provided invalid opinions, and whose work was not independently reviewed or verified by relevant experts.
Study 3
Examined the impact of judicial admissibility decisions on evidence persuasiveness. Participants evaluated a brief description of a bicycle helmet product evaluation provided by an engineering professor (see Schweitzer and Saks 2009). There were four types of judicial admissibility decision: control, implicit-admit, explicit-admit, and explicit-exclude. Those in the control condition were given no legal context for their evaluations of the professor’s product evaluation. Those in the implicit-admit condition were told they were making their judgements in the context of a civil liability trial but were not given information about evidence admissibility. Those in the explicit-admit condition were told that the professor’s evidence was subject to a thorough judicial review and was admissible for their consideration (i.e. could be relied upon in their decision-making). This condition served as high-quality evidence for the secondary analysis. Participants in the explicit-exclude condition were told that after a thorough judicial review the evidence was not admitted (i.e. should not be relied upon in their decision-making). This condition served as low-quality evidence for the secondary analysis.
Study 4
Examined the effects of expert ability and judicial admissibility decisions on evidence persuasiveness (preregistered at https://tinyurl.com/yxfbfs5e). There were three types of judicial admissibility decision in this study: control, explicit-admit, explicit-exclude. These conditions were operationalised the same way as in Study 3. The experimental materials also included information about ‘high’ or ‘low’ expert ability. In the high-ability conditions, participants were told that the engineering professor providing evidence had scored 90% accuracy on relevant proficiency tests. In the low-ability conditions, participants were told that the engineering professor providing evidence had scored 50% accuracy on relevant proficiency tests. The high-ability/explicit-admit condition served as high-quality evidence for the secondary analysis. The low-ability/explicit-exclude condition served as low-quality evidence in the secondary analysis.
Study 5
Examined the effects of discipline reliability and level of disclosure on evidence persuasiveness (preregistered at https://tinyurl.com/yyjsvzad). Participants read a report about either a high-reliability (fingerprint analysis) or low-reliability forensic discipline (footwear analysis). The report provided either a detailed or a sparse disclosure of important information about the evidence and opinion. In the detailed-disclosure conditions, the report was modelled on best-practice recommendations for expert reports submitted to police and courts (per Edmond et al. 2016). In the sparse-disclosure conditions, important information was omitted. The high-reliability/detailed-disclosure condition served as high-quality evidence in the secondary analysis. In this condition, participants read a detailed fingerprint analysis report stating that studies show fingerprint experts have expertise but can still make errors; that the error rates for the discipline could be as high as 1 in 306 or 1 in 18; and that no forensic method other than nuclear DNA analysis had been shown to demonstrate a connection between evidence and an individual or source. The low-reliability/detailed-disclosure condition served as low-quality evidence in the secondary analysis. Participants in this condition read a detailed footwear analysis report indicating that no studies have looked at error rates for footwear evidence, or examined whether footwear experts possess genuine expertise. They were also told that no appropriate black-box studies have supported the foundational validity of footwear analysis.
Study 6
Examined how different reasoning measures predict evidence persuasiveness (preregistered at https://tinyurl.com/yyp2dm3m). All participants in this study read and evaluated the same detailed expert footwear comparison report before completing one of three different measures of their reasoning. The report contained three important flaws that undermined the quality of the evidence: the expert reported performing with 45–55% accuracy on relevant proficiency tests; the results were reported fallaciously (i.e. the prosecutor’s fallacy; Thompson and Schumann 1987); and the footwear impression images used in the analysis were of limited quality. In all three conditions, participants evaluated the same low-quality evidence—only the dependent measures differed by condition. As such, the data from this study add to the data for low-quality evidence in our analyses and only speak to evidence-quality differences when combined with the data from the other six primary studies.
Study 7
Examined the effects of analysis method and method disclosure on evidence persuasiveness (preregistered at https://tinyurl.com/yyp2dm3m). Participants in this study read an opinion from a DNA analyst stating either the ‘biased’ (race-specific) or ‘unbiased’ (race-neutral) assumptions associated with the analytic method. Analyses completed using race-specific rather than race-neutral DNA databases are often conducted to produce more conservative random match probability estimates that inflate the likelihood that the defendant was the source of DNA associated with a crime (Oldt and Kanthaswamy 2020). Participants were also either given an additional statement explicitly disclosing the method used (race-specific or race-neutral database) or were provided with no explicit information about the method. The unbiased-method/disclosure-present condition served as high-quality evidence for the secondary analysis. Participants in this condition read a statement from the DNA analyst that the probability of observing the match between the suspect and crime scene samples was 100 million times greater than the probability of observing the same match ‘assuming that someone else, regardless of race, was the contributor’. They were then also told that this estimate was calculated ‘from a database that includes DNA frequency data from individuals of all races’. The biased-method/disclosure-absent condition served as low-quality evidence for the secondary analysis. Participants in this condition read a statement from the DNA analyst ‘assuming that someone else of the same race was the contributor’. These participants were not explicitly informed that the analysis was completed using a race-specific database.
Evidence persuasiveness
Participants in Studies 1–5 and 7 answered three questions about the specific type of evidence they were presented with, using on-screen sliders: (1) How credible was the expert? from 0 ‘not at all’ to 100 ‘definitely credible’; (2) How valuable was the evidence? from 0 ‘not at all’ to 100 ‘definitely valuable’; and (3) How much weight do you give to the evidence? from 0 ‘none at all’ to 100 ‘the most possible’. Participants in Study 6 answered only question three.
Implausible claim endorsement
To minimise social desirability in responding, the three implausible claims were randomly interspersed throughout an 11-item general knowledge battery. Participants rated general knowledge statements (e.g. Sharks are mammals and A kilogram is heavier than a gram) from 0 ‘not at all’ to 100 ‘definitely true’. Two of the three implausible claims included in the battery were based on items used in past research: Vaccines are harmful, and this fact is covered up (Jolley and Douglas 2014), and Global warming is a hoax (van der Linden 2015). The third item was new: The earth is flat. Implausible claim ‘endorsers’ demonstrated a high degree of belief in an implausible claim by rating at least one of these three items ≥ 75 out of 100 for truth. ‘Non-endorsers’ rated all three items lower than 50, indicating they regarded all the implausible claims as more false than true. Data from participants who rated at least one item at or above 50 but none at or above 75 were excluded from the analysis. Ratings were provided using an on-screen slider which had to be moved to progress in the study.
Procedure
After providing consent, participants were presented with the evidence materials containing the relevant quality information for their study and condition. They then answered study-specific questions about their perceptions of the evidence and completed the evidence persuasiveness measures (i.e. credibility, value, and/or weight). The general knowledge battery containing the implausible claims was presented after all study-specific dependent measures and before the demographic questions (except in Studies 2 and 5, where it followed the demographic questions). Finally, all participants were debriefed and thanked for their participation.
Analysis
Our analysis plan was preregistered. The R lme4 (v. 1.1-25; Bates et al. 2015) and lmerTest (v. 3.1-3; Kuznetsova et al. 2017) packages were used to construct a linear mixed-effects model predicting ‘persuasiveness’ (i.e. credibility, value, and weight) from the interaction between evidence quality (low or high) and endorsement status (endorser or non-endorser). A random intercept was included for each participant nested within each study, allowing baseline persuasiveness ratings to vary across studies and across participants within the same study. The lme.dscore function from the EMAtools package (v. 0.3.1; Kleiman 2017) was used to calculate effect sizes for the fixed effects in the model.
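The same model structure can be approximated in Python with statsmodels, which may help readers who do not use R. The sketch below is illustrative only: the column names, effect magnitudes, and the synthetic data are our inventions, not the study's data or code.

```python
# Illustrative Python analogue of an R model of the form
# lmer(persuasiveness ~ quality * endorser + (1 | study/participant)),
# fit with statsmodels on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

rows = []
for study in range(1, 8):
    for p in range(30):
        pid = f"s{study}p{p}"
        quality = p % 2             # 0 = low-, 1 = high-quality evidence
        endorser = int(p % 5 == 0)  # roughly 20% endorsers
        # Participant-level mean: quality and endorsement effects plus noise.
        base = 48 + 32 * quality + 10 * endorser + rng.normal(0, 8)
        # Each participant contributes three persuasiveness ratings
        # (credibility, value, weight), modelled as repeated measures.
        for _ in range(3):
            rows.append(dict(study=study, participant=pid, quality=quality,
                             endorser=endorser,
                             persuasiveness=base + rng.normal(0, 5)))
df = pd.DataFrame(rows)

# Random intercepts for participants nested within studies: 'study' is the
# grouping factor and the variance component adds per-participant intercepts.
model = smf.mixedlm("persuasiveness ~ quality * endorser", df,
                    groups=df["study"],
                    vc_formula={"participant": "0 + C(participant)"})
result = model.fit()
print(result.summary())
```

With the simulated effects above, the fitted `quality` and `endorser` coefficients should land near the generating values (32 and 10), mirroring the structure, though not the numbers, of the reported analysis.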
Implausible claim endorsement
The global warming claim was rated 75 or higher (i.e. endorsed) by 85 participants (11.4%), the vaccine claim was endorsed by 44 participants (5.9%), and the flat earth claim was endorsed by 19 participants (2.5%). See “Appendix A” for the distribution of responses for each implausible claim by endorsement status. Most participants (83.9%) rated no implausible claims at 75 or higher, 13.1% endorsed one claim, 2.1% endorsed two claims, and 0.8% endorsed all three implausible claims.
Overall, participants were significantly more persuaded by high-quality (M = 80.3, SD = 20.4) than low-quality evidence (M = 48.5, SD = 32.2; b = 32.59, SE = 2.42, t(673.53) = 13.45, p < 0.001, 95% CI [27.87, 37.35], Cohen’s d = 1.04; see Fig. 1). Endorsers were also significantly more persuaded by the presented evidence (M = 67.3, SD = 30.9) than non-endorsers (M = 61.4, SD = 31.9; b = 10.21, SE = 3.06, t(775.86) = 3.33, p < 0.001, 95% CI [4.21, 16.21], d = 0.24). The interaction between endorsement and evidence quality was not significant (b = − 9.22, SE = 5.31, t(700.75) = − 1.74, p = 0.083, 95% CI [− 19.61, 1.21], d = 0.13), but it is important to note that this result does not constitute evidence against such an interaction. Endorsers’ ratings of low-quality (M = 56.3, SD = 33.4) and high-quality evidence (M = 82.2, SD = 19.0) did not significantly differ from non-endorsers’ ratings of low-quality (M = 47.0, SD = 31.7) and high-quality evidence (M = 79.9, SD = 20.6). See “Appendix B” for figures showing persuasiveness by evidence quality and endorsement status within each study. See “Appendix C” for post hoc analyses using all eligible participants from the primary studies (N = 1,747) and different definitions of non-/endorsement status.
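The reported effect sizes are consistent with the t-to-d conversion used by EMAtools’ lme.dscore, d = 2|t|/√df. A quick sanity check (a sketch; the statistics are those reported above):

```python
import math

def d_from_t(t, df):
    """Cohen's d from a mixed-model t statistic: d = 2|t| / sqrt(df),
    the conversion applied by EMAtools' lme.dscore."""
    return 2 * abs(t) / math.sqrt(df)

# Check against the reported fixed effects:
print(round(d_from_t(13.45, 673.53), 2))  # evidence quality: reported d = 1.04
print(round(d_from_t(3.33, 775.86), 2))   # endorsement: reported d = 0.24
print(round(d_from_t(1.74, 700.75), 2))   # interaction: reported d = 0.13
```

All three recovered values match the effect sizes reported in the text.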
In this study, we examined whether people who endorse implausible claims evaluate high- or low-quality evidence differently to people who do not. We found both similarities and differences in how endorsers and non-endorsers assigned credibility, value, and weight to forensic evidence. Compared to non-endorsers, endorsers were more persuaded by the evidence they were presented. However, both endorsers and non-endorsers were more persuaded by high-quality than low-quality evidence. These results are inconsistent with predictions based on previous correlational research and suggest new avenues for interventions to reduce the harms associated with implausible claim endorsement.
In terms of similarities, we found that high-quality evidence was valued more than low-quality evidence, irrespective of whether or not a person held a strong belief that vaccines are harmful, the earth is flat, or that global warming is a hoax. That is, compared to non-endorsers, endorsers did not significantly differ in their sensitivity to our manipulations of expert characteristics such as legal relevance, trustworthiness, proficiency, methodological rigour, reliability, and transparency. Although past research suggests such evaluations may be far from optimal (McAuliff and Duckworth 2010; McAuliff et al., 2009), the observed similarity between endorsers and non-endorsers is not what we expected based on previous research.
Past studies have shown that people who more strongly endorse implausible claims typically have a more intuitive, reflexive cognitive style (Barron et al. 2018; Lobato et al. 2014; Mikušková 2018; Pennycook et al. 2015a; Pennycook and Rand 2020; Rizeq et al. 2020; Ståhl and van Prooijen 2018). As a result, researchers have inferred that people endorse implausible claims because they are lazy and ‘fail to think’ (Pennycook and Rand 2019, p. 47). This led us to predict that if people who endorse implausible claims do not analyse, then they would be equally persuaded by high-quality and low-quality evidence. However, that is not what we found.
Our results suggest that endorsers and non-endorsers both completed some form of reflective analysis when given the opportunity to evaluate claims with a diverse array of strengths and weaknesses. This result is consistent with Greene and Murphy’s finding (this issue) that levels of analytical reasoning did not significantly predict ability to discriminate between true and fabricated stories. Both of these results are inconsistent with a generalised failure to think. Thus, it may be a mistake to infer that the more intuitive, reflexive cognitive style of endorsers shows that they are lazy and do not analyse (Pennycook and Rand 2019). Instead, performance on our more realistic test of analytical performance shows that endorsers may be less reflective or have limited analytical skills compared to non-endorsers. This interpretation is further supported by the observed differences in persuasiveness ratings between those who endorse implausible claims and those who do not.
Overall, endorsers were more persuaded by the presented evidence than non-endorsers. This general overvaluing could be because endorsers were relatively more optimistic about the strengths of evidence, and/or less pessimistic about the weaknesses of the evidence—although the former appears more likely given our data. Either way, the result suggests that endorsers differ from non-endorsers in their perceptions of what is or should be persuasive. Consequently, we may need to consider different strategies for reducing implausible belief formation and maintenance than those typically described in the literature.
Researchers examining implausible beliefs and cognitive style have tended to advocate for interventions that will shift people towards a more deliberative, reflective analytical strategy, for example, by ‘slowing down for a moment’ (Ward and Garety 2017, see also Bronstein et al. 2019; Greene and Murphy, this issue; Pennycook and Rand 2019). These suggestions are supported by experimental studies showing that implausible beliefs are reduced by additional deliberation time and information processing resources (Bago et al. 2020; Swami et al. 2014). Yet, it is unclear how much encouragement to deliberate would have changed the responses of endorsers in our sample. Instead, the generalised overvaluing of evidence suggests that endorsers may need help to appreciate the impact of various strengths and weaknesses on evidence quality. Thus, interventions focused on building analytical competence—for instance through education about research methods or threats to validity (McAuliff et al. 2009)—may be a promising avenue for further research.
It is important to be aware of some limitations when considering our results. First, we did not explicitly measure the cognitive style of our participants using, for example, the CRT or the AOT. As a result, we do not know whether endorsers in our sample had a more or less reflective analytical style than non-endorsers. We can only say that endorsers engaged in a reflective form of evidence evaluation that resulted in high-quality evidence being rated as more persuasive than low-quality evidence. Future research could measure both analytical performance and cognitive style to examine whether aspects of cognitive style can help to explain the differences between endorsers and non-endorsers that we observed.
It is also important to acknowledge that we used an ad hoc approach for assessing beliefs in implausible claims. We included three implausible claims in a general knowledge test battery and classified those who strongly believed any one of the claims as ‘endorsers’, and those who regarded all of them as more false than true as ‘non-endorsers’. This approach may have resulted in over- or under-inclusive definitions, which in turn could affect our results. However, the distribution of endorsement ratings suggests it is unlikely that the composition of endorsement groups would substantially change if we used more or less conservative definitions (see “Appendix A”). We also conducted post hoc analyses to examine the possible effects of different definitions on our results and found that both endorsers and non-endorsers were sensitive to evidence quality irrespective of the composition of non-/endorser groups or the evidence quality manipulations (see “Appendix C”). Nevertheless, it is important for future studies to replicate our findings using data collected primarily for that purpose.
Overall, our study suggests that it is not laziness that separates those who believe implausible claims from those who do not. Instead, limited analytical skills may play a role in the development and maintenance of a range of implausible beliefs. These limitations could be addressed through interventions targeting evaluative performance. However, further research examining the relative contributions of cognitive style and analytical skill is vital for developing the most effective interventions to minimise the harms caused by implausible beliefs.
Availability of data and materials
The datasets generated and/or analysed during the current study are available in the Open Science Framework https://tinyurl.com/y4e75wo2.
Abbreviations
AOT: Actively open-minded thinking
CRT: Cognitive reflection test
MS2R: Motivated system 2 reasoning
NRC: National Research Council
PCAST: President’s Council of Advisors on Science and Technology
WHO: World Health Organization
Allington, D., Duffy, B., Wessely, S., Dhavan, N., & Rubin, J. (2020). Health-protective behaviour, social media usage and conspiracy belief during the COVID-19 public health emergency. Psychological Medicine. https://doi.org/10.1017/S003329172000224X.
Bago, B., Rand, D. G., & Pennycook, G. (2020). Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines. Journal of Experimental Psychology: General. https://doi.org/10.1037/xge0000729.
Barron, D., Furnham, A., Weis, L., Morgan, K. D., Towell, T., & Swami, V. (2018). The relationship between schizotypal facets and conspiracist beliefs via cognitive processes. Psychiatry Research, 259, 15–20. https://doi.org/10.1016/j.psychres.2017.10.001.
Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. https://doi.org/10.18637/jss.v067.i01.
Bensley, D. A., Lilienfeld, S. O., Rowan, K. A., Masciocchi, C. M., & Grain, F. (2020). The generality of belief in unsubstantiated claims. Applied Cognitive Psychology, 34, 16–28. https://doi.org/10.1002/acp.3581.
Brewer, S. (1998). Scientific expert testimony and intellectual due process. The Yale Law Journal, 107(6), 1535–1681. https://doi.org/10.2307/797336.
Bronstein, M. V., Pennycook, G., Bear, A., Rand, D. G., & Cannon, T. D. (2019). Belief in fake news is associated with delusionality, dogmatism, religious fundamentalism, and reduced analytic thinking. Journal of Applied Research in Memory and Cognition, 8(1), 108–117. https://doi.org/10.1016/j.jarmac.2018.09.005.
Cacioppo, J. T., Petty, R. E., Feinstein, J. A., & Jarvis, W. B. G. (1996). Dispositional differences in cognitive motivation: the life and times of individuals varying in need for cognition. Psychological Bulletin, 119(2), 197. https://doi.org/10.1037/0033-2909.119.2.197.
Derwin, A. C. C. (2018). The judicial admission of faulty scientific expert evidence informing wrongful convictions. Western Journal of Legal Studies, 8(2), 1–19.
Douglas, K. M., Uscinski, J. E., Sutton, R. M., Cichocka, A., Nefes, T., Ang, C. S., & Deravi, F. (2019). Understanding conspiracy theories. Political Psychology, 40, 3–35. https://doi.org/10.1111/pops.12568.
Drinkwater, K., Denovan, A., Dagnall, N., & Parker, A. (2017). An assessment of the dimensionality and factorial structure of the revised paranormal belief scale. Frontiers in Psychology, 8, 1693. https://doi.org/10.3389/fpsyg.2017.01693.
Dror, I. E. (2016). A hierarchy of expert performance. Journal of Applied Research in Memory and Cognition, 5(2), 121–127. https://doi.org/10.1016/j.jarmac.2016.03.001.
Edmond, G. (2015). Forensic science evidence and the conditions for rational (jury) evaluation. Melbourne University Law Review, 39(1), 77–127.
Edmond, G., Found, B., Martire, K., Ballantyne, K., Hamer, D., Searston, R., et al. (2016). Model forensic science. Australian Journal of Forensic Sciences, 48(5), 496–537. https://doi.org/10.1080/00450618.2015.1128969.
Eleventh circuit pattern jury instructions, criminal. (2020). Atlanta, GA.
Evans, J. S. B., & Stanovich, K. E. (2013). Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science, 8(3), 223–241. https://doi.org/10.1177/1745691612460685.
Frau-Meigs, D. (2019). Societal costs of “fake news” in the Digital Single Market. European Parliament.
Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25–42. https://doi.org/10.1257/089533005775196732.
Garrett, B. L. (2017). Actual innocence and wrongful convictions. Academy for Justice, A Report on Scholarship and Criminal Justice Reform (Erik Luna ed., 2017 Forthcoming).
Garrett, B. L., & Mitchell, G. (2018). The proficiency of experts. University of Pennsylvania Law Review, 166(4), 901–960. https://doi.org/10.1002/bsl.2402.
Garrett, B. L., & Neufeld, P. J. (2009). Invalid forensic science testimony and wrongful convictions. Virginia Law Review, 95(1), 1–97.
Giannelli, P. C. (1993). Junk science: The criminal cases. The Journal of Criminal Law and Criminology, 84(1), 105.
Greene, C. M., & Murphy, G. (this issue). Individual differences in susceptibility to false memories for COVID-19 fake news. Cognitive Research: Principles and Implications.
Gross, S. R. (1991). Expert evidence. Wisconsin Law Review, 1113–1232. https://repository.law.umich.edu/articles/196
Hand, L. (1901). Historical and practical considerations regarding expert testimony. Harvard Law Review, 15(1), 40–58. https://doi.org/10.2307/1322532.
Hilbert, J. (2019). The disappointing history of science in the courtroom: Frye, Daubert, and the ongoing crisis of junk science in criminal trials. Oklahoma Law Review, 71(3), 759–822.
Hills, T. T. (2019). The dark side of information proliferation. Perspectives on Psychological Science, 14(3), 323–330. https://doi.org/10.1177/1745691618803647.
Imhoff, R., & Lamberty, P. (2020). A bioweapon or a hoax? The link between distinct conspiracy beliefs about the Coronavirus disease (COVID-19) outbreak and pandemic behavior. Social Psychological and Personality Science. https://doi.org/10.31234/osf.io/ye3ma.
Jolley, D., & Douglas, K. M. (2014). The effects of anti-vaccine conspiracy theories on vaccination intentions. PLoS ONE, 9(2), 1–9. https://doi.org/10.1371/journal.pone.0089177.
Judicial Commission of New South Wales, issuing body. (2020). Criminal trial courts bench book. Retrieved October 13, 2020, from http://nla.gov.au/nla.obj-467012383.
Jurs, A. W. (2015). Expert prevalence, persuasion, and price: What trial participants really think about experts? Indiana Law Journal, 91, 353–391.
Kahneman, D. (2011). Thinking, fast and slow. New York: Macmillan.
Kleiman, E. (2017). EMAtools: Data management tools for real-time monitoring/ecological momentary assessment data. R package version 0.3.1.
Kuznetsova, A., Brockhoff, P. B., & Christensen, R. H. B. (2017). lmerTest Package: Tests in linear mixed effects models. Journal of Statistical Software, 82(13), 1–26.
Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6(4), 353–369.
Lindeman, M., & Aarnio, K. (2007). Superstitious, magical, and paranormal beliefs: An integrative model. Journal of Research in Personality, 41(4), 731–744.
Lobato, E., Mendoza, J., Sims, V., & Chin, M. (2014). Examining the relationship between conspiracy theories, paranormal beliefs, and pseudoscience acceptance among a university population. Applied Cognitive Psychology, 28, 617–625. https://doi.org/10.1002/acp.3042.
Martire, K. A., & Edmond, G. (2016). Rethinking expert opinion evidence. Melbourne University Law Review, 40, 967.
Martire, K. A., Edmond, G., & Navarro, D. (2020). Exploring juror evaluations of expert opinions using the Expert Persuasion Expectancy framework. Legal and Criminological Psychology. https://doi.org/10.1111/lcrp.12165.
McAuliff, B. D., & Duckworth, T. D. (2010). I spy with my little eye: Jurors’ detection of internal validity threats in expert evidence. Law and Human Behavior, 34(6), 489–500. https://doi.org/10.1007/s10979-010-9219-3.
McAuliff, B. D., Kovera, M. B., & Nunez, G. (2009). Can jurors recognize missing control groups, confounds, and experimenter bias in psychological science? Law and Human Behavior, 33(3), 247–257. https://doi.org/10.1007/s10979-008-9133-0.
Mikušková, E. B. (2018). Conspiracy beliefs of future teachers. Current Psychology, 37(3), 692–701. https://doi.org/10.1007/s12144-017-9561-4.
Mnookin, J. L. (2007). Expert evidence, partisanship, and epistemic competence. Brooklyn Law Review, 73, 1009–1033.
National Research Council of the Academies of Science. (2009). Strengthening Forensic Science in the United States: A path forward. Washington, DC: The National Academies Press. https://doi.org/10.1016/0379-0738(86)90074-5.
Norris, P., & Epstein, S. (2011). An experiential thinking style: its facets and relations with objective and subjective criterion measures. Journal of Personality, 79(5), 1043–1080. https://doi.org/10.1111/j.1467-6494.2011.00718.x.
Oldt, R. F., & Kanthaswamy, S. (2020). Expanded CODIS STR allele frequencies—Evidence for the irrelevance of race-based DNA databases. Legal Medicine, 42, 101642. https://doi.org/10.1016/j.legalmed.2019.101642.
Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2015). On the reception and detection of pseudo-profound bullshit. Judgment and Decision Making, 10(6), 549–563.
Pennycook, G., Fugelsang, J. A., & Koehler, D. J. (2015). Everyday consequences of analytic thinking. Current Directions in Psychological Science, 24(6), 425–432. https://doi.org/10.2139/ssrn.2644392.
Pennycook, G., & Rand, D. G. (2019). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition, 188, 39–50. https://doi.org/10.1016/j.cognition.2018.06.011.
Pennycook, G., & Rand, D. G. (2020). Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. Journal of Personality, 88(2), 185–200. https://doi.org/10.1111/jopy.12476.
President's Council of Advisors on Science and Technology. (2016). Forensic science in criminal courts: ensuring scientific validity of feature-comparison methods. United States: Executive Office of the President's Council of Advisors on Science and Technology.
Raeder, M. (2003). What does innocence have to do with it: Commentary on wrongful convictions and rationality? Law Review of Michigan State University Detroit College of Law, 2003(4), 1315–1336. https://doi.org/10.1017/9781316417119.010.
Rizeq, J., Flora, D. B., & Toplak, M. E. (2020). An examination of the underlying dimensional structure of three domains of contaminated mindware: Paranormal beliefs, conspiracy beliefs, and anti-science attitudes. Thinking & Reasoning. https://doi.org/10.1080/13546783.2020.1759688.
Ross, R. M., Pennycook, G., McKay, R., Gervais, W. M., Langdon, R., & Coltheart, M. (2016). Analytic cognitive style, not delusional ideation, predicts data gathering in a large beads task study. Cognitive Neuropsychiatry, 21(4), 300–314. https://doi.org/10.1080/13546805.2016.1192025.
Schweitzer, N. J., & Saks, M. J. (2009). The gatekeeper effect: The impact of judges’ admissibility decisions on the persuasiveness of expert testimony. Psychology, Public Policy, and Law, 15(1), 1–18. https://doi.org/10.1037/a0015290.
Sindermann, C., Cooper, A., & Montag, C. (2020). A short review on susceptibility to falling for fake political news. Current Opinion in Psychology, 36, 44–48. https://doi.org/10.1016/j.copsyc.2020.03.014.
Ståhl, T., & Van Prooijen, J. W. (2018). Epistemic rationality: Skepticism toward unfounded beliefs requires sufficient cognitive ability and motivation to be rational. Personality and Individual Differences, 122, 155–163. https://doi.org/10.1016/j.paid.2017.10.026.
Stanovich, K. E., & West, R. F. (1997). Reasoning independently of prior belief and individual differences in actively open-minded thinking. Journal of Educational Psychology, 89(2), 342–357. https://doi.org/10.1037/0022-0663.89.2.342.
Swami, V., Chamorro-Premuzic, T., & Furnham, A. (2010). Unanswered questions: A preliminary investigation of personality and individual difference predictors of 9/11 conspiracist beliefs. Applied Cognitive Psychology, 24, 749–761. https://doi.org/10.1002/acp.1583.
Swami, V., Voracek, M., Stieger, S., Tran, U. S., & Furnham, A. (2014). Analytic thinking reduces belief in conspiracy theories. Cognition, 133(3), 572–585. https://doi.org/10.1016/j.cognition.2014.08.006.
Thayer, J. (1890). “Law and fact” in jury trials. Harvard Law Review, 4(4), 147–175. https://doi.org/10.2307/1321285.
Thompson, W. C., & Schumann, E. L. (1987). Interpretation of statistical evidence in criminal trials. Law and Human Behavior, 11, 167–187. https://doi.org/10.1007/BF01044641.
van der Linden, S. (2015). The conspiracy-effect: Exposure to conspiracy theories (about global warming) decreases pro-social behavior and science acceptance. Personality and Individual Differences, 87, 171–173. https://doi.org/10.1016/j.paid.2015.07.045.
von Ahn, L., Maurer, B., McMillen, C., Abraham, D., & Blum, M. (2008). ReCAPTCHA: Human-based character recognition via web security measures. Science, 321(5895), 1465–1468.
Ward, T. (2017). Expert testimony, law and epistemic authority. Journal of Applied Philosophy, 34(2), 263–277. https://doi.org/10.1111/japp.12213.
Ward, T., & Garety, P. A. (2017). Fast and slow thinking in distressing delusions: A review of the literature and implications for targeted therapy. Schizophrenia Research, 203, 80–87. https://doi.org/10.1016/j.schres.2017.08.045.
Zhang, L.-F., & Sternberg, R. J. (2006). The nature of intellectual styles. New Jersey: Lawrence Erlbaum Associates Publishers.
Zimmermann, F., & Kohring, M. (2020). Mistrust, disinforming news, and vote choice: A panel survey on the origins and consequences of believing disinformation in the 2017 German Parliamentary Election. Political Communication, 37(2), 215–237. https://doi.org/10.1080/10584609.2019.1686095.
Thank you to Tess Neal, Sreetharan Kanthaswamy, and Robert Oldt for their assistance and input into the design and implementation of Study 7. Thank you also to Dr. Jon Berengut, Professor Bobby Spellman, and Professor Jeff Zacks for their thoughtful comments and suggestions.
KAM was supported by funding from the Australian Research Council Linkage Project LP160100008. BG was supported by funding from the National Science Foundation under Grant No. 1823741.
Ethics approval and consent to participate
All data were collected in accordance with ethical guidelines. Studies 1–6 were approved by the UNSW Human Research Ethics Approval Panel C—Behavioural Sciences: Study 1 and 2: #2912; Study 3 and 4: #3232; Study 5: #3123; Study 6: #3233. Study 7 was approved by the Arizona State University Institutional Review Board: STUDY00011342.
Competing interests
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Martire, K.A., Growns, B., Bali, A.S. et al. Limited not lazy: a quasi-experimental secondary analysis of evidence quality evaluations by those who hold implausible beliefs. Cogn. Research 5, 65 (2020). https://doi.org/10.1186/s41235-020-00264-z
Keywords
- Conspiracy theories
- Fake news
- Implausible beliefs
- Evidence evaluation
- Cognitive reflection test
- Forensic evidence
- Analytical thinking