Game-based inoculation versus graphic-based inoculation to combat misinformation: a randomized controlled trial
Cognitive Research: Principles and Implications volume 8, Article number: 49 (2023)
Misinformation affects various aspects of people’s lives, such as politics, entertainment, and social interactions. However, effective intervention measures to combat misinformation are lacking. Inoculation theory has become a prevalent basis for countermeasures against misinformation. This study drew on inoculation theory and developed an interactive game to help the public counter misinformation. In the game, players take on the role of a misinformation spreader who aims to gain followers for a virtual account using different strategies. A total of 180 Chinese participants were randomly assigned to game-based inoculation, graphic-based inoculation, and control groups. The results indicated that both types of inoculation intervention significantly decreased the perceived credibility and sharing intention of misinformation. Game-based inoculation was more effective than graphic-based inoculation at reducing the perceived credibility of misinformation, and the intervention effects remained stable after 2 weeks. Graphic-based inoculation showed a sleeper effect, whereby the intervention required a period of time to take effect. Neither inoculation produced countereffects on the perceived credibility or sharing intention of accurate information.
Misinformation has become a severe social problem, and helping the public deal with it is a significant focus of psychological research. One effective approach has been to use psychological inoculation to combat misinformation. Still, the comparative effectiveness of different forms of inoculation, and their impact on the perception of accurate information, needs further investigation. In this study conducted in China, we compared the effects of game-based and graphic-based forms of inoculation and analyzed their impact on the perception of accurate information and misinformation. We found that both forms of inoculation effectively reduced the perceived credibility and sharing intention of misinformation, and the effects remained stable for two weeks. Game-based inoculation was more effective than graphic-based inoculation in reducing the perceived credibility of misinformation. Neither form of inoculation affected the perceived credibility or sharing intention of accurate information. These findings demonstrate the effectiveness of inoculation in combating misinformation and suggest that more active inoculation measures should be developed and applied.
Misinformation affects the daily lives of individuals and the functioning of society. During the COVID-19 pandemic, vaccine-related misinformation reduced the public’s willingness to be vaccinated (Nuwarda et al., 2022), and some people adopted incorrect prevention methods, such as refusing to wear masks (Aghababaeian et al., 2020) or taking ineffective remedies such as consuming alcohol (Pennycook et al., 2020). Moreover, misinformation can lead to political polarization (Scheufele & Krause, 2019) and harm democratic institutions (Ecker et al., 2022).
There are various ways to combat misinformation. For example, experts and relevant organizations conduct fact-checks on misinformation (Paynter et al., 2019), and social media platforms have improved their structures and detection systems to reduce the likelihood of misinformation dissemination (Pennycook et al., 2021a, 2021b; Vosoughi et al., 2017). However, the effects of correction often dissipate quickly, and social media platforms cannot block the appearance of misinformation entirely. Therefore, improving individuals’ own abilities is a more effective way to address the shortcomings of current measures. Researchers have effectively increased the public’s resilience to misinformation using online toolkits and media literacy programs (Guess et al., 2020; McGrew, 2020).
Among the methods used to enhance individuals’ ability to cope with misinformation, interventions based on inoculation theory have been widely applied because of their short intervention time and scalability. Inoculation theory suggests that, just as injecting a weakened dose of a virus can activate the production of antibodies, the same process can be applied in the context of information processing (Ecker et al., 2022). Inoculation interventions have been used in several domains: the "Bad News" game was designed by Roozenbeek and van der Linden (2019) to enhance resistance to misinformation strategies, Jolley and Douglas (2017) exposed participants to anti-vaccine arguments to improve resistance to anti-vaccine beliefs, and Agley et al. (2021) presented participants with scientific infographics to prevent online misinformation about COVID-19.
With increasing research, inoculation interventions have evolved from primarily focusing on the content of information toward the techniques behind the information, and also from passive to active inoculation (van der Linden, 2022). This shift could increase the specificity and scalability of interventions. Roozenbeek and van der Linden (2019) applied six techniques to combat misinformation (discredit, conspiracy, trolling, polarization, impersonation, and emotion); Basol et al. (2021) applied three techniques to combat COVID-19-related misinformation (fearmongering, fake experts, and conspiracy). Passive inoculation requires participants to passively read the rebuttals provided in the inoculation message (Compton & Pfau, 2005), whereas in active inoculation, participants must generate their own rebuttals to the arguments presented in the message. Active inoculation is thought to be more effective than passive inoculation because the "internal" rebuttal it relies on involves a more complex cognitive process (Green et al., 2022), although this process also requires more time and motivation.
Current misinformation interventions are mainly focused on Western countries. Language and cultural differences in intervention content also result in a lack of evidence regarding the generalizability of the interventions (Kozyreva et al., 2022). This study was conducted in an Eastern country (China), providing further evidence on the globalization of intervention effects. Although some researchers believe that active interventions perform better (Mayer, 2019), there is inadequate evidence comparing the effects of different forms of misinformation interventions (van der Linden, 2022). Exploring the effects of different forms of inoculation can help relevant institutions choose more effective methods based on actual needs when implementing interventions. With the development of misinformation research, researchers have found that misinformation interventions may damage trust in accurate information (Guess et al., 2020): an intervention may reduce the perceived credibility of accurate information along with that of misinformation. When researchers conduct interventions against misinformation, such potential countereffects must be considered.
Based on the above questions, this study designed an online game called "Distinguishing Truth from Misinformation" on WeChat, one of the largest Chinese social media platforms. The study examined whether a game-based intervention could reduce the perceived credibility and sharing intention of misinformation more effectively than a graphic-based intervention.
For the present study, we tested the following hypotheses:
H1: Participants in both the game-based and graphic-based inoculation conditions will report lower perceived credibility of misinformation than those in the control condition.
H2: Participants in the game-based inoculation condition will report lower perceived credibility of misinformation than those in the graphic-based inoculation condition.
H3: Two weeks after exposure to the intervention, participants in the game-based and graphic-based inoculation groups will maintain the intervention effect on the perceived credibility of misinformation.
H4: Participants in both the game-based and graphic-based inoculation conditions will not report lower perceived credibility of accurate information than those in the control condition.
H5: Participants in both the game-based and graphic-based inoculation conditions will report lower sharing intention of misinformation than those in the control condition.
H6: Participants in the game-based inoculation condition will report lower sharing intention of misinformation than those in the graphic-based inoculation condition.
H7: Two weeks after exposure to the intervention, participants in the game-based and graphic-based inoculation groups will maintain the intervention effect on the sharing intention of misinformation.
H8: Participants in both the game-based and graphic-based inoculation conditions will not report lower sharing intention of accurate information than those in the control condition.
The effectiveness of the “Bad News” game designed by Roozenbeek and van der Linden (2019) has been confirmed in different environments. The present intervention takes the main techniques used in the “Bad News” game and adds suggestions from mainstream platforms for responding to misinformation. In total, players experience eight techniques in the game: filter bubbles, emotion, impersonation, fake experts, conspiracy theories, false evidence, hidden sponsors, and social media robots. Details are presented in Table 1.
Based on inoculation theory, the game aims to educate players about the importance and urgency of the misinformation problem and provide them with resistance strategies against future encounters with misinformation (Basol et al., 2021). In this five-minute game, players act as misinformation spreaders trying to gain followers for their virtual accounts. During the game, various communication strategies are presented as dialog options that the player triggers to spread misinformation; players thereby learn about online misinformation techniques and their consequences. For ethical reasons, the intervention follows the suggestion of Greene et al. (2022): before the game started, players were alerted that they would read misinformation, and after the game, the misinformation was corrected. Details about the game can be seen in Additional file 1: Intervention game screenshots.
Considering the effect sizes reported in previous inoculation studies (Roozenbeek & van der Linden, 2019), an a priori power analysis was conducted with G*Power 3.1 using α = 0.05, f = 0.26, and power of 0.90 for a repeated-measures ANOVA (Faul et al., 2009). The minimum sample size required was 117 participants. A total of 180 participants were recruited from 29 provinces in China, most of whom were university students. The participants had a mean age of 21.24 years (SD = 1.98); 43% (77) were male, 57% (103) were female, 51% (92) were from rural areas, and 49% (88) were from urban areas.
The misinformation measurement materials used in this study follow the recommendations of Pennycook et al. (2021a). Materials were selected from China’s mainstream fact-checking platform (www.piyao.org.cn). As Weibo is currently the primary source of information in China (Zhu et al., 2020), the news materials were edited into the form of Weibo posts. Information providers and interaction counts were blurred to avoid their influence. The misinformation and accurate information materials mainly concerned health and safety information during the COVID-19 pandemic between 2020 and 2022. Following the method of Ecker et al. (2020) for selecting misinformation, 30 Weibo users were recruited to evaluate the materials on a five-point Likert scale. Materials with the following assessment results were excluded: familiarity scores > 4, credibility scores < 2 or > 4, and emotional intensity scores < 2 or > 4. Thirty materials were included (15 true and 15 false).
As shown in Fig. 1, 180 participants were randomly assigned to the game-based inoculation group (n = 60), graphic-based inoculation group (n = 60), or control group (n = 60). The game-based inoculation group played the game designed for this study. The graphic-based inoculation group received the same techniques via graphic materials at the same time as the game-based group played the game. The control group played Tetris for the same amount of time as the other two groups.
Before the intervention, participants completed measures of the covariates (demographics, media literacy, and cognitive ability). Perceived credibility and sharing intention were assessed at pre-test, post-test, and a 2-week follow-up. At each stage, participants received 10 posts in the form of Weibo articles (five true and five false) and evaluated the materials on a five-point Likert scale. After reading each post, each participant was asked, “Is the post above accurate?” (1 = totally not accurate, 5 = very accurate) and “Would you consider sharing this post online?” (1 = totally unwilling to share, 5 = very willing to share). Different materials were presented to participants at different measurement stages. Participants received approximately 1 USD after completing the intervention. Details about the measurement materials can be seen in Additional file 2: Measurement materials.
Perceived credibility: For each post, the perceived credibility was based on the average rating on a five-point Likert scale (1 = totally not accurate, 5 = very accurate). A higher perceived credibility score indicated a higher level of belief in the post.
Sharing intention: For each post, sharing intention was evaluated using an average rating on a five-point Likert scale (1 = totally unwilling to share, 5 = very willing to share). A higher sharing intention score indicates that a person is more likely to share the post.
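To make the scoring concrete, the sketch below derives the four per-participant outcome scores (misinformation/accurate × credibility/sharing) from raw ratings. The `(is_true, credibility, sharing)` tuple layout is a hypothetical data structure for illustration, not the authors' actual scoring code.

```python
def score_participant(ratings):
    """Average the 1-5 ratings of one participant into four outcome scores.

    ratings: list of (is_true, credibility, sharing) tuples, one per rated
    post (the study used ten posts per stage: five true, five false).
    """
    def avg(values):
        return sum(values) / len(values)

    false_posts = [(c, s) for t, c, s in ratings if not t]
    true_posts = [(c, s) for t, c, s in ratings if t]
    return {
        "misinfo_credibility": avg([c for c, _ in false_posts]),
        "misinfo_sharing": avg([s for _, s in false_posts]),
        "accurate_credibility": avg([c for c, _ in true_posts]),
        "accurate_sharing": avg([s for _, s in true_posts]),
    }
```

Each participant thus contributes one value per outcome and stage, which is what the repeated-measures analyses below operate on.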
Media literacy: Media literacy is a primary factor that influences individuals’ perceptions of misinformation (Su et al., 2022). This study used the media literacy scale developed by Jones-Jang et al. (2021), which consists of four questions. Sample items include: “I would follow the news using multiple media sources” and “I would contact news organizations to express my reactions and criticisms.” The Cronbach’s α for media literacy was 0.915.
Cognitive ability: Cognitive ability also influences susceptibility to misinformation. We used the cognitive ability testing method proposed by Pennycook et al. (2020). The six test questions were designed to elicit automatic, intuitive responses that must be overridden to answer correctly. The score was the number of correct answers. The Cronbach’s α for cognitive ability was 0.902.
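The Cronbach's α values reported for the two scales follow the standard formula α = k/(k−1) · (1 − Σσᵢ² / σ_total²). A minimal stdlib sketch (illustrative only; either population or sample variances can be used here, since the common scaling factor cancels in the ratio):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: list of per-item score lists, one inner list per scale item,
    aligned across respondents (same respondent order in every list).
    """
    k = len(items)
    sum_item_vars = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    return (k / (k - 1)) * (1 - sum_item_vars / pvariance(totals))
```

For perfectly correlated items the function returns 1.0, matching the upper bound of the statistic.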
A one-way ANOVA was conducted on the pre-test scores. There were no significant differences among the three groups in the perceived credibility of misinformation, F(2, 177) = 1.584, p = 0.208; sharing intention of misinformation, F(2, 177) = 1.421, p = 0.244; perceived credibility of accurate information, F(2, 177) = 0.904, p = 0.407; or sharing intention of accurate information, F(2, 177) = 0.344, p = 0.709. The differences in demographic variables between the groups, shown in Table 2, were also not statistically significant.
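These baseline-equivalence checks are one-way between-subjects ANOVAs. A minimal stdlib sketch of the F statistic and its degrees of freedom (illustrative only; the reported analyses were presumably run in a standard statistics package):

```python
def one_way_anova(groups):
    """One-way between-subjects ANOVA.

    groups: list of lists of scores, one inner list per group.
    Returns (F, df_between, df_within).
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    group_means = [sum(g) / len(g) for g in groups]
    # Between-groups and within-groups sums of squares
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w
```

With three groups of 60 participants, `df_b` and `df_w` come out as 2 and 177, matching the F(2, 177) values reported above.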
Misinformation perceived credibility
To test hypotheses H1, H2, and H3, the present study used a mixed repeated-measures ANOVA to examine differences in the perceived credibility of misinformation among groups across measurement times. Intervention form (game-based, graphic-based, and control) was the between-subjects factor, and measurement time (pre-test, post-test, and follow-up) was the within-subjects factor. The perceived credibility scores for each measurement time and intervention form are shown in Fig. 2.
The results showed a significant interaction between intervention form and measurement time, F(4, 354) = 2.65, p = 0.033, η² = 0.014. The main effect of intervention form was significant, F(2, 177) = 15.63, p < 0.001, η² = 0.074, and the main effect of measurement time was not, F(2, 354) = 2.29, p = 0.103, η² = 0.006. Tukey’s HSD post-hoc tests showed that, at post-test, the perceived credibility of misinformation in the game-based group was significantly lower than in the control group (Mdiff = − 0.59, ptukey < 0.001, d = − 0.89, 95% CI [− 1.49, − 0.28]; large effect). The graphic-based group did not differ significantly from the control group (Mdiff = − 0.30, ptukey = 0.248, d = − 0.45, 95% CI [− 1.04, 0.14]; small to intermediate effect), and the game-based group did not differ significantly from the graphic-based group (Mdiff = − 0.30, ptukey = 0.248, d = − 0.45, 95% CI [− 1.04, 0.14]; small to intermediate effect).
At the 2-week follow-up, the perceived credibility of misinformation in the game-based group remained significantly lower than in the control group (Mdiff = − 0.61, ptukey < 0.001, d = − 0.91, 95% CI [− 1.52, − 0.31]; large effect), and the graphic-based group was also significantly lower than the control group (Mdiff = − 0.40, ptukey = 0.026, d = − 0.61, 95% CI [− 1.20, − 0.01]; intermediate effect).
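The effect sizes reported in this section are Cohen's d for two independent groups. Assuming the conventional pooled-sample-SD formula (the paper does not state the exact variant used), a minimal sketch:

```python
from statistics import mean, variance

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled sample SD."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * variance(a) + (nb - 1) * variance(b))
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd
```

By Cohen's conventional benchmarks, |d| ≈ 0.2 is small, ≈ 0.5 intermediate, and ≈ 0.8 large, which is how the effect labels in the text are assigned.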
Accurate information perceived credibility
To test hypothesis H4, a mixed repeated-measures ANOVA was conducted to examine differences in the perceived credibility of accurate information among groups across measurement times. We observed no significant interaction between intervention form and measurement time, F(4, 354) = 0.40, p = 0.807, η² = 0.002, no significant main effect of intervention form, F(2, 177) = 1.46, p = 0.235, η² = 0.008, and no significant main effect of measurement time, F(2, 354) = 1.41, p = 0.246, η² = 0.004. That is, there were no significant differences in the perceived credibility of accurate information among the intervention groups at any measurement time (Fig. 3).
Misinformation sharing intention
To test hypotheses H5, H6, and H7, a mixed repeated-measures ANOVA was conducted to examine differences in the sharing intention of misinformation among groups across measurement times.
The results showed a significant interaction between intervention form and measurement time, F(4, 354) = 2.47, p = 0.044, η² = 0.011. The main effect of intervention form was significant, F(2, 177) = 10.49, p < 0.001, η² = 0.064, and the main effect of measurement time was not, F(2, 354) = 2.81, p = 0.062, η² = 0.006. The sharing intention scores for each measurement time and intervention form are shown in Fig. 4. Tukey’s HSD post-hoc tests revealed that, at post-test, the sharing intention of misinformation in the game-based group was significantly lower than in the control group (Mdiff = − 0.68, ptukey < 0.001, d = − 0.82, 95% CI [− 1.42, − 0.21]; large effect), and the graphic-based group was also significantly lower than the control group (Mdiff = − 0.50, ptukey = 0.028, d = − 0.60, 95% CI [− 1.20, − 0.01]; intermediate effect). There was no significant difference between the game-based and graphic-based groups (Mdiff = − 0.17, ptukey = 0.963, d = − 0.21, 95% CI [− 0.80, 0.38]; small effect).
At the 2-week follow-up, the sharing intention of misinformation in the game-based group remained significantly lower than in the control group (Mdiff = − 0.60, ptukey = 0.003, d = − 0.73, 95% CI [− 1.33, − 0.13]; intermediate to large effect), and the graphic-based group also remained significantly lower than the control group (Mdiff = − 0.51, ptukey = 0.025, d = − 0.61, 95% CI [− 1.21, − 0.02]; intermediate effect).
Accurate information sharing intention
To test hypothesis H8, a mixed repeated-measures ANOVA was conducted to examine differences in the sharing intention of accurate information among groups across measurement times. We observed no significant interaction between intervention form and measurement time, F(4, 354) = 0.33, p = 0.860, η² = 0.001, no significant main effect of intervention form, F(2, 177) = 1.75, p = 0.177, η² = 0.012, and no significant main effect of measurement time, F(2, 354) = 1.16, p = 0.316, η² = 0.002. That is, there were no significant differences in the sharing intention of accurate information among the intervention groups at any measurement time (Fig. 5).
The present study validated the effectiveness of a game-based inoculation in reducing the perceived credibility and sharing intention of misinformation in China. The results support the efficacy of interventions based on inoculation theory in countering misinformation. After the intervention, participants who received either the game-based or the graphic-based treatment showed reduced perceived credibility and sharing intention of misinformation. This suggests that inoculation can activate people’s alertness to misinformation, allowing them to form more confident refutations of it, thereby reducing their acceptance of it and enabling them to recognize and reject it better (van der Linden, 2022). At the 2-week follow-up, we found that both game-based and graphic-based inoculations had stable effects, similar to previous findings (Maertens et al., 2021a, 2021b), demonstrating the value of inoculation in countering misinformation.
Additionally, the game-based intervention (d = − 0.89) was more effective in reducing the perceived credibility of misinformation than the graphic-based intervention (d = − 0.45), indicating that active inoculation had a better effect than passive inoculation. From a cognitive perspective, a possible reason is that game-based interventions can better utilize multisensory stimuli such as visual, auditory, and tactile cues to improve learners’ memory and comprehension of information (Petri & Gresse von Wangenheim, 2017).
For the graphic-based intervention, the perceived credibility of misinformation did not differ significantly from the control group at post-test; a significant decrease appeared only at the follow-up stage, indicating a sleeper effect associated with passive inoculation. That is, the effect of passive inoculation may strengthen over time before it decays. This result is consistent with the view of McGuire, the proposer of inoculation theory: to enable individuals to develop arguments in defense of their attitude, it is imperative to introduce a time gap between the inoculation treatment and the attack message, as this delay facilitates the necessary cognitive processing and response generation (Banas & Rains, 2010). The sleeper effect has also been documented in research on attitude change and persuasion (Kumkale & Albarracín, 2004). In contrast, the perceived credibility of misinformation in the game-based group decreased significantly at both post-test and follow-up, suggesting that active inoculation directly activates immunity in individuals and further demonstrating the advantage of active over passive inoculation.
This study further examined the effects of inoculation on the perceived credibility and sharing intention of accurate information. The results showed that neither the game-based nor the graphic-based intervention affected the perceived credibility or sharing intention of accurate information, indicating that the inoculation interventions did not indiscriminately increase skepticism toward all information.
In practical applications, the game-based intervention designed in this study has advantages for implementing anti-misinformation measures. The online gaming intervention can be deployed to a larger population, and with an increasing number of internet users, this method is more suitable for social media dissemination than traditional offline or online teaching modes in terms of intervention time and scope (van der Linden et al., 2021). Furthermore, the approach can be adapted as misinformation changes in form and characteristics, and it is more likely to attract the public to engage in repeated interventions that actively strengthen the effect.
This study has some limitations. First, the misinformation measurement materials used in this study lack standardization, which could make it difficult to compare results across studies (Maertens et al., 2021a, 2021b). Future research should therefore develop standardized measurement tools from a cross-cultural perspective and expand beyond questionnaire measurements to behavioral indicators, such as attention time to content and to critical information. Second, this study focused on the image-and-text misinformation commonly found on social media. However, with the explosive growth of short-form video and the widespread use of AI tools such as ChatGPT, deepfakes and mixed true–false information are becoming more prevalent and even more challenging to differentiate (Hwang et al., 2021); future research should therefore also address these forms of misinformation. Finally, the present study performed only a single follow-up measurement and did not examine the decay trend of intervention durability. Just as the duration of vaccine efficacy determines the timing of revaccination, we need to track how the effect changes after inoculation (Maertens et al., 2021a, 2021b). Future studies should measure the decay of inoculation effects at multiple time points rather than at a single point (Goel et al., 2021).
We cannot correct all misinformation; it therefore makes sense to help the public guard against its dangers in advance. To this end, we designed a game-based inoculation to help the public resist misinformation. The results show that inoculation interventions can effectively counter misinformation and that game-based inoculation is more effective in reducing the perceived credibility of misinformation. Both interventions remained stable after 2 weeks, with no countereffects on the perceived credibility or sharing intention of accurate information.
Availability of data and materials
The data set with potentially identifying information removed and all analysis scripts have been made publicly available via the Open Science Framework (OSF) and can be accessed at: https://osf.io/g5hmb/.
Aghababaeian, H., Hamdanieh, L., & Ostadtaghizadeh, A. (2020). Alcohol intake in an attempt to fight COVID-19: A medical myth in Iran. Alcohol, 88, 29–32. https://doi.org/10.1016/j.alcohol.2020.07.006
Agley, J., Xiao, Y., Thompson, E. E., Chen, X., & Golzarri-Arroyo, L. (2021). Intervening on trust in science to reduce belief in COVID-19 misinformation and increase COVID-19 preventive behavioral intentions: Randomized controlled trial. Journal of Medical Internet Research, 23(10), e32425. https://doi.org/10.2196/32425
Banas, J. A., & Rains, S. A. (2010). A meta-analysis of research on inoculation theory. Communication Monographs, 77(3), 281–311. https://doi.org/10.1080/03637751003758193
Basol, M., Roozenbeek, J., Berriche, M., Uenal, F., McClanahan, W. P., & van der Linden, S. (2021). Towards psychological herd immunity: Cross-cultural evidence for two prebunking interventions against COVID-19 misinformation. Big Data & Society, 8(1), 20539517211013868. https://doi.org/10.1177/20539517211013868
Compton, J. A., & Pfau, M. (2005). Inoculation theory of resistance to influence at maturity: Recent progress in theory development and application and suggestions for future research. Annals of the International Communication Association, 29(1), 97–146. https://doi.org/10.1080/23808985.2005.11679045
Ecker, U. K. H., Lewandowsky, S., & Chadwick, M. (2020). Can corrections spread misinformation to new audiences? Testing for the elusive familiarity backfire effect. Cognitive Research: Principles and Implications, 5(1), 41. https://doi.org/10.1186/s41235-020-00241-6
Ecker, U. K. H., Lewandowsky, S., Cook, J., Schmid, P., Fazio, L. K., Brashier, N., Kendeou, P., Vraga, E. K., & Amazeen, M. A. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology, 1(1), 13. https://doi.org/10.1038/s44159-021-00006-y
Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160. https://doi.org/10.3758/brm.41.4.1149
Gabarron, E., Oyeyemi, S. O., & Wynn, R. (2021). COVID-19-related misinformation on social media: A systematic review. Bulletin of the World Health Organization, 99(6), 455-463A. https://doi.org/10.2471/BLT.20.276782
Goel, R. R., Painter, M. M., Apostolidis, S. A., Mathew, D., Meng, W., Rosenfeld, A. M., Lundgreen, K. A., Reynaldi, A., Khoury, D. S., Pattekar, A., Gouma, S., Kuri-Cervantes, L., Hicks, P., Dysinger, S., Hicks, A., Sharma, H., Herring, S., Korte, S., Baxter, A. E., Wherry, E. J., et al. (2021). MRNA vaccines induce durable immune memory to SARS-CoV-2 and variants of concern. Science, 374(6572), abm0829. https://doi.org/10.1126/science.abm0829
Goga, O., Venkatadri, G., & Gummadi, K. P. (2015). The doppelgänger bot attack: Exploring identity impersonation in online social networks. In Proceedings of the 2015 Internet Measurement Conference, 141–153. https://doi.org/10.1145/2815675.2815699
Green, M., McShane, C. J., & Swinbourne, A. (2022). Active versus passive: Evaluating the effectiveness of inoculation techniques in relation to misinformation about climate change. Australian Journal of Psychology, 74(1), 2113340. https://doi.org/10.1080/00049530.2022.2113340
Greene, C., Murphy, G., de Saint Laurent, C., Prike, T., Hegarty, K., & Ecker, U. (2022). Best practices for ethical conduct of misinformation research: A scoping review and critical commentary. European Psychologist. https://doi.org/10.1027/1016-9040/a000491
Guess, A. M., Lerner, M., Lyons, B., Montgomery, J. M., Nyhan, B., Reifler, J., & Sircar, N. (2020). A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proceedings of the National Academy of Sciences, 117(27), 15536–15545. https://doi.org/10.1073/pnas.1920498117
Hameleers, M. (2022). Separating truth from lies: Comparing the effects of news media literacy interventions and fact-checkers in response to political misinformation in the US and Netherlands. Information, Communication & Society, 25(1), 110–126. https://doi.org/10.1080/1369118X.2020.1764603
Hwang, Y., Ryu, J. Y., & Jeong, S.-H. (2021). Effects of disinformation using deepfake: the protective effect of media literacy education. Cyberpsychology, Behavior, and Social Networking, 24(3), 188–193. https://doi.org/10.1089/cyber.2020.0174
Jolley, D., & Douglas, K. M. (2017). Prevention is better than cure: Addressing anti-vaccine conspiracy theories. Journal of Applied Social Psychology, 47(8), 459–469. https://doi.org/10.1111/jasp.12453
Jones-Jang, S. M., Mortensen, T., & Liu, J. (2021). Does media literacy help identification of fake news? Information literacy helps, but other literacies don’t. American Behavioral Scientist, 65(2), 371–388. https://doi.org/10.1177/0002764219869406
Kozyreva, A., Lorenz-Spreen, P., Herzog, S., Ecker, U., Lewandowsky, S., & Hertwig, R. (2022). Toolbox of Interventions Against Online Misinformation and Manipulation. PsyArXiv. https://doi.org/10.31234/osf.io/x8ejt
Kumkale, G. T., & Albarracín, D. (2004). The sleeper effect in persuasion: A meta-analytic review. Psychological Bulletin, 130(1), 143–172. https://doi.org/10.1037/0033-2909.130.1.143
Kuru, O., Pasek, J., & Traugott, M. W. (2020). When pundits weigh do expert and partisan critiques in news reports shape ordinary individuals’ interpretations of polls? Mass Communication and Society, 23(5), 628–655. https://doi.org/10.1080/15205436.2020.1774780
Maani, N., van Schalkwyk, M. C. I., Filippidis, F. T., Knai, C., & Petticrew, M. (2022). Manufacturing doubt: Assessing the effects of independent vs industry-sponsored messaging about the harms of fossil fuels, smoking, alcohol, and sugar sweetened beverages. SSM Population Health, 17, 101009. https://doi.org/10.1016/j.ssmph.2021.101009
Maertens, R., Götz, F., Schneider, C. R., Roozenbeek, J., Kerr, J. R., Stieger, S., McClanahan, W. P., III., Drabot, K., & van der Linden, S. (2021a). The misinformation susceptibility test (MIST): A psychometrically validated measure of news veracity discernment. PsyArXiv. https://psyarxiv.com/gk68h/
Maertens, R., Roozenbeek, J., Basol, M., & van der Linden, S. (2021b). Long-term effectiveness of inoculation against misinformation: Three longitudinal experiments. Journal of Experimental Psychology: Applied, 27(1), 1–16. https://doi.org/10.1037/xap0000315
Mayer, R. E. (2019). Computer games in education. Annual Review of Psychology, 70, 531–549. https://doi.org/10.1146/annurev-psych-010418-102744
McGrew, S. (2020). Learning to evaluate: An intervention in civic online reasoning. Computers & Education, 145, 103711. https://doi.org/10.1016/j.compedu.2019.103711
Nguyen, C. T. (2020). Echo chambers and epistemic bubbles. Episteme, 17(2), 141–161. https://doi.org/10.1017/epi.2018.32
Nuwarda, R. F., Ramzan, I., Weekes, L., & Kayser, V. (2022). Vaccine hesitancy: Contemporary issues and historical background. Vaccines, 10(10), 1595. https://doi.org/10.3390/vaccines10101595
Paynter, J., Luskin-Saxby, S., Keen, D., Fordyce, K., Frost, G., Imms, C., Miller, S., Trembath, D., Tucker, M., & Ecker, U. (2019). Evaluation of a template for countering misinformation: Real-world Autism treatment myth debunking. PLoS ONE, 14(1), e0210746. https://doi.org/10.1371/journal.pone.0210746
Pennycook, G., Binnendyk, J., Newton, C., & Rand, D. G. (2021a). A practical guide to doing behavioral research on fake news and misinformation. Collabra: Psychology, 7(1), 25293. https://doi.org/10.1525/collabra.25293
Pennycook, G., Epstein, Z., Mosleh, M., Arechar, A. A., Eckles, D., & Rand, D. G. (2021b). Shifting attention to accuracy can reduce misinformation online. Nature, 592(7855), 590–595. https://doi.org/10.1038/s41586-021-03344-2
Pennycook, G., McPhetres, J., Zhang, Y., Lu, J. G., & Rand, D. G. (2020). Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy-nudge intervention. Psychological Science, 31(7), 770–780. https://doi.org/10.1177/0956797620939054
Petri, G., & Gresse von Wangenheim, C. (2017). How games for computing education are evaluated? A systematic literature review. Computers & Education, 107, 68–90. https://doi.org/10.1016/j.compedu.2017.01.004
Pummerer, L. (2022). Belief in conspiracy theories and non-normative behavior. Current Opinion in Psychology, 47, 101394. https://doi.org/10.1016/j.copsyc.2022.101394
Roozenbeek, J., & van der Linden, S. (2019). Fake news game confers psychological resistance against online misinformation. Palgrave Communications, 5(1), 65. https://doi.org/10.1057/s41599-019-0279-9
Scheufele, D. A., & Krause, N. M. (2019). Science audiences, misinformation, and fake news. Proceedings of the National Academy of Sciences of the United States of America, 116(16), 7662–7669. https://doi.org/10.1073/pnas.1805871115
Scott, M., Bunce, M., & Wright, K. (2019). Foundation funding and the boundaries of journalism. Journalism Studies, 20(14), 2034–2052. https://doi.org/10.1080/1461670X.2018.1556321
Su, Y., Lee, D. K. L., & Xiao, X. (2022). “I enjoy thinking critically, and I’m in control”: Examining the influences of media literacy factors on misperceptions amidst the COVID-19 infodemic. Computers in Human Behavior, 128, 107111. https://doi.org/10.1016/j.chb.2021.107111
van der Linden, S. (2022). Misinformation: Susceptibility, spread, and interventions to immunize the public. Nature Medicine, 28(3), 460–467. https://doi.org/10.1038/s41591-022-01713-6
van der Linden, S., Roozenbeek, J., Maertens, R., Basol, M., Kácha, O., Rathje, S., & Traberg, C. S. (2021). How can psychological science help counter the spread of fake news? The Spanish Journal of Psychology. https://doi.org/10.1017/SJP.2021.23
Vosoughi, S., Mohsenvand, M. N., & Roy, D. (2017). Rumor Gauge: Predicting the veracity of rumors on Twitter. ACM Transactions on Knowledge Discovery from Data, 11(4), 1–36. https://doi.org/10.1145/3070644
Zerback, T., Töpfl, F., & Knöpfle, M. (2021). The disconcerting potential of online disinformation: Persuasive effects of astroturfing comments and three strategies for inoculation against them. New Media & Society, 23(5), 1080–1098. https://doi.org/10.1177/1461444820908530
Zhu, B., Zheng, X., Liu, H., Li, J., & Wang, P. (2020). Analysis of spatiotemporal characteristics of big data on social media sentiment with COVID-19 epidemic topics. Chaos, Solitons & Fractals, 140, 110123. https://doi.org/10.1016/j.chaos.2020.110123
Funding
This work was supported by the Ministry of Education, Humanities and Social Sciences project (Grant Number 21YJCZH056).
Ethics approval and consent to participate
The study was reviewed and approved by the institutional ethics committee, and it was conducted in accordance with the ethical standards as laid down in the 1964 Declaration of Helsinki and its later amendments. Written informed consent was obtained from all individual participants included in the study.
Competing interests
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Hu, B., Ju, XD., Liu, HH. et al. Game-based inoculation versus graphic-based inoculation to combat misinformation: a randomized controlled trial. Cogn. Research 8, 49 (2023). https://doi.org/10.1186/s41235-023-00505-x
Keywords
- Game-based intervention
- Graphic-based intervention