To see or not to see: the parallel processing of self-relevance and facial expressions
Cognitive Research: Principles and Implications volume 8, Article number: 70 (2023)
Abstract
The self, like a center of gravity in cognition, facilitates the processing of information that is directly relevant to it. This phenomenon is known as the self-prioritization effect. However, it remains unclear whether the self-prioritization effect extends to the processing of emotional facial expressions. To fill this gap, we used a self-association paradigm to investigate the impact of self-relevance on the recognition of emotional facial expressions while controlling for confounding factors such as familiarity and overlearning. Using a large and diverse sample, we replicated the effect of self-relevance on face processing but found no evidence that self-relevance modulates facial emotion recognition. We propose two potential theoretical explanations for these findings and emphasize that further research with different experimental designs and a multitask measurement approach is needed to understand this mechanism fully. Overall, our study contributes to the literature on the parallel cognitive processing of self-relevance and facial emotion recognition, with implications for both social and cognitive psychology.
Introduction
As highly social beings, humans have to deal with huge amounts of information in their social interactions, both about themselves and about others. One of the earliest robust findings in cognitive psychology indicates that self-related information is preferentially processed over other kinds of information (Rogers et al., 1977). For instance, it is easier to recognize one's own name (Bargh, 1982), voice (Candini et al., 2014), and body parts (Frassinetti et al., 2011) compared to those of others. However, while there is broad evidence for biased processing of self-related information, the literature has pointed to methodological weaknesses in self-related research in general, suggesting that the effect may be driven by the learning of self-related information, such as one's name and face, over a long period of time (Sui & Gu, 2017). Thus, a troublesome familiarity confound underlies the interpretation of the self-prioritization effect: it is impossible to disentangle the effect of self-relatedness from that of familiarity and overlearning. Over the last decade, numerous researchers have found that this bias towards the self over others occurs not only for information consolidated over the long term, but also for information that has been associated with the self only temporarily, even within the last few minutes (Sui et al., 2012). Using a learning approach to temporarily associate the self with unfamiliar novel stimuli, researchers have elegantly ruled out the confounding influence of familiarity and overlearning (Lee et al., 2021; Sui et al., 2015).
To distinguish this prioritized processing from self-related information in general, here we use the term “self-relevance” to indicate the benefit of processing information that is temporarily related to the self. A study by Sui et al. (2012) provided the first direct empirical evidence for this new approach, named the self-association paradigm. Participants were instructed to associate social labels (e.g., self or other) with neutral geometric shapes. A subsequent perceptual matching task indicated that shapes associated with self-labels were judged faster and more accurately than those associated with other labels, even though this association was wholly temporary. This approach has been conceptually replicated and combined with different specific tasks across cognitive domains, such as attention (Dalmaso et al., 2019), decision-making (Sui et al., 2016), and action control (Desebrock et al., 2018; Frings & Wentura, 2014). This growing body of evidence reflects the high malleability of self-relevance.
Facial expressions of emotion are among the most crucial cues in social interactions (Van Kleef, 2009). In fact, research has shown that more than 50% of emotional information is transmitted through facial expressions during social communication (Lapakko, 1997). From a developmental perspective on social cognition, it is widely accepted that the processing of self-related information is related to the processing of emotional facial expressions (Happé et al., 2017). In addition, evidence from various sub-disciplines, including evolutionary psychology (Conway et al., 2019; Gonzalez-Liencres et al., 2013), clinical psychology (Uddin, 2011; Williams, 2010), social psychology (Ma & Han, 2010), and cognitive neuroscience (Northoff, 2016; Scheller & Sui, 2022), suggests a potential relationship between the self and the processing of emotional facial expressions. Taken together, these studies suggest that the self plays a vital role in the socio-cognitive processing of emotional facial expressions.
Despite these effects of self-relevance on cognition (Sui & Humphreys, 2017), the potential impact of self-relevance on the processing of emotional expressions has received relatively little attention. A forward citation search of the paper by Sui et al. (2012) using Google Scholar yielded only six publications out of 316 citations related to emotional facial expressions. Most of these six studies provide only indirect evidence of how self-relevance affects the processing of emotional facial expressions. For instance, in Constable et al. (2021) and McIvor et al. (2021), participants associated social labels with happy or sad expressions and thereafter performed a perceptual matching task with the label-drawing pairs. Although self-associated facial expressions were recognized more accurately than those associated with other labels, these findings cannot directly support the claim that self-relevance improves facial expression processing, because the perceptual matching task measures only the magnitude of self-association, not the cognitive performance in processing emotional expressions (Woźniak & Knoblich, 2019). Although these studies provide some insight into the relationship between self- and facial emotion-related processing, further direct evidence is needed from a task that specifically measures facial expression recognition.
Recent studies (Payne et al., 2017; Woźniak & Hohwy, 2020; Woźniak & Knoblich, 2019; Woźniak et al., 2018) have replicated this biased processing in the case of self-relevant facial stimuli. These studies extended the evidence on prioritized self-associative processing to the domain of facial stimuli, showing that self-association with an unfamiliar face can improve performance on a perceptual matching task with the same faces. Furthermore, by means of event-related potentials (ERPs), Woźniak et al. (2018) found that the perception of self-associated, previously unfamiliar faces led to the same modulation of face-processing-related ERPs as the perception of one's own face. This result was interpreted not only as evidence for the formation of self-relevance with these faces but also as support for the idea that self-relevance can directly enhance facial processing, which is an essential stepping-stone for the development of facial emotion recognition (Happé et al., 2017). Given that face processing and facial emotion recognition are highly correlated abilities (Hildebrandt et al., 2015), exploring the influence of self-relevance on facial emotion recognition using the self-association paradigm is particularly meaningful. In addition, some of the previous studies have been criticized for their low power and convenience samples. For example, the largest sample size among the above-mentioned studies was 31, and nearly all participants were students. To remedy this problem, it has been recommended to collect larger samples of participants with diverse backgrounds (Camerer et al., 2018). As a first goal, we thus aimed to replicate the effect of self-association using facial stimuli in a larger and more diverse sample.
Usually, six basic emotions are studied in facial emotion recognition research: Happiness, Surprise, Fear, Sadness, Disgust, and Anger. Cross-cultural research has shown that these prototypical expressions can be accurately identified and distinguished from each other (Elfenbein & Ambady, 2002). However, several studies mentioned above have investigated the relationship between self-relevance and facial emotion processing for only some of these basic emotion categories. For example, Cunningham et al. (2022) examined only faces expressing anger. Some studies used more than one emotion, mainly happiness and sadness expressions (Feldborg et al., 2021; McIvor et al., 2021; Stolte et al., 2017a; Yankouskaya & Sui, 2021). Indeed, happiness and sadness are the two expressions at the opposite ends of the positive–negative valence spectrum (Bimler & Kirkland, 2001). However, there is evidence of a more fine-grained, category-related specificity in emotion recognition (Kirita & Endo, 1995; Kirouac & Doré, 1983; Wells et al., 2016). Studies have shown different accuracy and speed levels when processing different facial expressions of emotion. For example, there is a large literature suggesting that happy faces are recognized more accurately than other facial expressions (e.g., Kirita & Endo, 1995; Stolte et al., 2017b; Svard et al., 2012), while fear expressions are difficult to recognize and are often confused with sadness, given the overlapping facial action units between these expressions (Guarnera et al., 2015). Thus, the generalizability of previous studies may be limited by their narrow focus on a few emotion categories. Methodological studies have long recommended using all basic emotion categories when measuring facial emotion recognition ability (O’Sullivan & Ekman, 2004). Therefore, for a more complete picture, we here aim to investigate whether self-association influences the processing of facial expressions of emotion across all basic emotion categories.
Accordingly, the aim of our study was twofold. First, we attempted to replicate the effect of self-relevance on face processing using a large and diverse sample. To achieve this goal, we recruited participants with diverse demographic backgrounds via an online crowd-working platform. Previous validation studies have demonstrated that the quality of data obtained from online crowd-working platforms is comparable to (Armitage & Eerola, 2020), or even better than (Hauser & Schwarz, 2016), that of data collected in the lab. We used a perceptual matching task to examine the effect of self-relevance using facial stimuli, following the procedure used in previous studies (see details below). If the replication succeeds, we expect that after the association learning, participants will respond more accurately and faster to the faces associated with the self, as compared to those associated with other labels.
Given the role of self-related information processing and emotional facial expression processing in social communication (Bayer et al., 2017; Lee et al., 2023), we aimed to investigate the potential influence of self-relevance on the recognition of emotional expressions. Specifically, we investigated whether the effect of self-relevance extends beyond mere face processing to influence subsequent facial emotion recognition. To investigate this association comprehensively, we used a facial expression recognition paradigm with emotional composite faces covering all six basic emotions (see below). This paradigm has been used repeatedly as a measure of emotion expression recognition performance (Calder et al., 2000; Durand et al., 2007; Hildebrandt et al., 2015; McKendrick et al., 2016; Meaux & Vuilleumier, 2016; Tanaka et al., 2012; Wilhelm et al., 2014). We hypothesized more accurate and faster responses towards the emotional expressions displayed by faces associated with the self-label as compared to those associated with other labels. We further expected differences between emotion categories, in line with the above-elaborated category specificity in emotion recognition ability. Finally, we anticipated an interaction between self-relevance and emotion categories.
Method
Participants
The data reported in this study were collected from 302 adult participants enrolled in a larger study investigating socio-emotional abilities and self-concept. All participants were recruited via the Prolific platform (www.prolific.co) in August 2021. To be eligible, individuals were required to be currently residing in the UK, to possess near-native English proficiency, and to report normal or corrected-to-normal vision. Three participants were excluded due to incomplete responses. The final sample therefore consisted of N = 299 participants, of whom 44% identified as female, 54% as male, and 2% as non-binary. The mean age of the sample was 32.14 years (SD = 11.29, range 18 to 75), and the participants had a reasonably heterogeneous educational background: 26.76% held a high school degree, 55.18% held an associate or bachelor's degree, and 18.06% held a degree higher than a bachelor's degree. The study was reviewed and approved by the Committee of Ethics of the [Double Blind for the review process]. All participants provided informed consent and received a monetary compensation of 8.5 pounds for their participation.
Stimulus material
All face photographs were taken from a study by Wilhelm and colleagues (2014) and depicted eight models (four biological females and four biological males). The photograph of a ninth model was used to create stimuli for the practice trials. None of the models had any distinctive features, such as makeup, piercings, or glasses, and all models were photographed under identical lighting and background conditions for consistency. To ensure the emotional salience of the stimuli, all photos of emotional expressions were evaluated and selected by trained researchers, additionally using the FaceReader software, as detailed in Wilhelm et al. (2014). Each photo was then uniformly cropped by fitting it into a vertical ellipse of 300 by 200 pixels to eliminate non-facial cues such as clothing and hair.
Procedure
The experiment was created and hosted using the Gorilla Experiment Builder (Anwyl-Irvine et al., 2020), and participants completed the study on their own laptops or desktop computers. The experiment consisted of three parts. In the first part, participants underwent a learning phase to memorize the associations between unknown neutral faces and the self vs. other labels. Following this, a perceptual matching task was administered, similar to those used in previous studies investigating self-association with facial stimuli. In the third part, participants completed a specific task measuring facial emotion recognition performance, namely the recognition task with emotional composite faces. The procedure is illustrated in detail in Fig. 1. The entire experiment lasted approximately 2 h. After completing the tasks mentioned above, participants additionally completed self-report measures of personality, as well as several ability measures of social cognition, which are beyond the scope of this study.
Learning phase
During the learning phase, participants were asked to associate unknown neutral faces with social labels ("You" or "Stranger"). All faces, with the associated social labels written below them, were presented on the screen one by one in random order. Participants were given 15 s to learn each face-label pairing, a timing chosen to match previous studies on self-association. In contrast to previous studies that used only one face for each label, we used four facial models (two male, two female) for each social label, with the model-label assignment counterbalanced. Participants thus associated themselves with four different facial identities, while associating the social label "stranger" with another set of four facial identities. We did so to reduce potential confounding by any specific facial model. Each face-label pairing was repeated twice to reduce the memory load associated with learning eight different models. Detailed instructions were shown before the beginning of the learning phase, using a practice face to ensure that participants understood the procedure.
Perceptual matching task
In line with previous studies, participants in this task were required to judge whether a label and a facial model displayed in sequence matched what they had learned during the learning phase, or whether the label and the facial identity did not match. To ensure that participants had learned the identity of the facial models and not merely other features of the photographs, we used not only the photographs presented during the learning phase but also new photographs of the same face models with neutral expressions. These new photographs were cropped according to the same procedure as the original photographs. The only difference between the new and original photographs was a slight change in lighting and viewing angle (smaller than 1 degree). Each of the eight matching pairings was presented four times (twice using the original photographs and twice using the new photographs). Each of the eight mismatching pairings was presented four times as well. In total, this task thus consisted of 64 trials. Prior to the task, a practice trial was administered to ensure participants understood the procedure.
Each trial began with a fixation cross for 400 ms, followed by a face image for 800 ms and a delay period of 1 s. After the delay period, one of the labels ("You" or "Stranger") was displayed until participants responded using one of two response keys on the keyboard ("f" and "j"). We used the pronoun "You" here because previous studies also used this word and showed no significant difference in the pattern of results between the pronouns "Me" and "You" (Woźniak & Hohwy, 2020). After a key press, visual feedback on the response (correct or incorrect) was presented for 3 s. Participants were instructed to respond as quickly and accurately as possible; the maximum response time was 5 s. If participants responded more slowly than 3 s, they received feedback encouraging quicker responses in the next trial.
Facial emotion recognition task with composite faces
As described in the introduction, we used a facial expression recognition paradigm with emotional composite faces to measure the emotion expression recognition performance. In this task, participants had to identify the emotion in an emotional composite face presented on one of the face halves (top vs. bottom) while ignoring the other half, which served to induce interference and increase task difficulty. These emotional composite faces were created by aligning the top and bottom halves of faces with different expressions, taken from the same person.
In line with previous studies, to avoid ceiling effects due to the unequal distribution of discriminative information between the upper and lower parts of the face for certain emotions, fear, sadness, and anger were used only in the upper part, while disgust, happiness, and surprise were used only in the lower part (Durand et al., 2007; Hildebrandt et al., 2015; Wilhelm et al., 2014). This resulted in nine possible composites per model being used in the experiment. Examples of composite faces are provided in Fig. 2.
The task was a 2AFC (two-alternative forced choice) task, in which participants pressed one of two keys on the keyboard (“f” and “j”) to indicate the emotion of the upper or lower half, respectively. The targeted half was indicated by the label “Top” or “Bottom”, displayed on the screen simultaneously with the emotional composite face. Each trial began with a fixation cross in the center of the screen for 1 s, followed by the emotional composite face and the target word, which remained on the screen until the participant responded. If participants did not respond within 6 s of stimulus onset, they were encouraged to respond more quickly in the next trial.
To counterbalance the two target words (“Top” and “Bottom”) across the nine emotional composites, each composite face was presented twice, once probing the upper half and once probing the lower half. This resulted in a total of 144 trials (8 models × 9 composites × 2 targets), presented in two randomized blocks of 72 trials each. Participants were allowed to take a break between the blocks. To ensure that participants understood the task, each block started with nine practice trials using the practice facial model, with feedback provided.
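For concreteness, the resulting trial structure can be reproduced in a few lines of R (a minimal sketch; the variable and level names are ours and not taken from the original experiment code):

```r
# Illustrative reconstruction of the composite-face design described above.
# Top halves carry fear, sadness, or anger; bottom halves carry disgust,
# happiness, or surprise; each of the 8 models contributes 9 composites,
# and each composite is probed once for the top and once for the bottom half.
design <- expand.grid(
  model  = paste0("model_", 1:8),
  top    = c("fear", "sadness", "anger"),
  bottom = c("disgust", "happiness", "surprise"),
  target = c("Top", "Bottom")
)
nrow(design)  # 8 x 3 x 3 x 2 = 144 trials
```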
Data analyses
The data analysis was conducted using R version 3.5.1. The code for the data analysis can be found on the website for this project at [Double Blind for the review process].
Our analyses targeted accuracy and response times (RTs) for both the perceptual matching task and the emotional expression recognition task with composite faces. To ensure that only participants who were attentive during the learning of the self-associations were included, we analyzed only those who responded correctly on at least 60% of the trials, which is significantly better than random guessing. We removed trials with RTs shorter than 100 ms, as such responses are too fast to reflect plausible cognitive processing. Following the recommendation of Berger and Kiefer (2021), we applied an exclusion method based on z-scores of RTs to remove within-person outliers for each task separately. This procedure excluded 1.82% of trials in the perceptual matching task and 5.28% of trials in the emotional expression recognition task, rates comparable to previous studies in this domain.
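A minimal sketch of this pre-processing pipeline is shown below, assuming a long-format data frame `trials` with columns `participant`, `task`, and `rt` (these names, and the |z| > 3 cutoff, are our illustrative assumptions; Berger and Kiefer (2021) compare several z-score-based criteria):

```r
library(dplyr)

clean <- trials %>%
  filter(rt >= 100) %>%                        # drop implausibly fast trials
  group_by(participant, task) %>%              # within-person, per task
  mutate(z_rt = (rt - mean(rt)) / sd(rt)) %>%  # z-standardize RTs
  ungroup() %>%
  filter(abs(z_rt) <= 3)                       # remove within-person outliers
```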
Statistical analysis in the frequentist framework
Linear mixed models (LMMs) and generalized linear mixed models (GLMMs) were applied separately to predict response accuracy and RTs in each task. The LMMs were fitted using the lmer function from the lmerTest 3.1.3 package, while the GLMMs were fitted using the glmmTMB function from the glmmTMB 1.1.3 package. The RTs of correct trials were modelled using LMMs with a log transformation, although untransformed RTs yielded similar results. The accuracy of each trial was modelled using GLMMs with a logit link function. LMMs are more flexible than traditional repeated measures ANOVAs, as they relax the strict statistical assumptions of ANOVAs and result in a more precise estimation of standard errors of regression coefficients (Boisgontier & Cheval, 2016).
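As an illustration, the models for the perceptual matching task could be specified as follows (a sketch assuming the hypothetical data frame `clean` from above, with column names of our choosing; the actual analysis code is available via the project's OSF page):

```r
library(lmerTest)  # lmer() with significance tests for fixed effects
library(glmmTMB)

# LMM for log-transformed RTs of correct trials
fit_rt <- lmer(
  log(rt) ~ matching * association +
    (1 + matching + association | participant),
  data = subset(clean, task == "matching" & correct == 1)
)

# GLMM for trial-level accuracy with a logit link
fit_acc <- glmmTMB(
  correct ~ matching * association +
    (1 + matching + association | participant),
  family = binomial(link = "logit"),
  data = subset(clean, task == "matching")
)
```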
In the perceptual matching task, we used sum contrasts to code the two fixed factors, matching (matching or mismatching pairs based on the label and face) and association (with self or stranger), and their interaction in the (G)LMMs. Similarly, for the facial emotion recognition task, we used sum contrasts to code the two fixed factors, association (with self or stranger) and emotion categories (Happiness, Surprise, Fear, Sadness, Disgust, Anger), and their interactions in the (G)LMMs. The ANOVA-like omnibus tests for main effects and interaction are reported for all predictors, and p-values are computed based on Type III Wald tests. Post-hoc pairwise tests were conducted using the lsmeans function from the emmeans 1.7.3 package with Holm-Bonferroni adjustments.
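In R, this corresponds to something like the following sketch, reusing the hypothetical `clean` and `fit_rt` from above (we call emmeans(), of which lsmeans() is an alias):

```r
library(car)      # Anova() for Type III Wald chi-square tests
library(emmeans)  # post-hoc pairwise comparisons

# Sum contrasts for both factors (assumed to be coded as factors)
contrasts(clean$matching)    <- contr.sum(2)
contrasts(clean$association) <- contr.sum(2)

# ANOVA-like omnibus tests for main effects and the interaction
car::Anova(fit_rt, type = "III")

# Post-hoc pairwise tests with Holm-Bonferroni adjustment
emm <- emmeans(fit_rt, ~ association | matching)
pairs(emm, adjust = "holm")
```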
Because trials from the same participant, and trials using the same facial model, are not independent, we started with a crossed random-effects structure for participants and facial models, following the recommendation of Baayen et al. (2008). To assess the degree to which variance was explained by each random-effects structure, ICC coefficients were calculated for the by-participant and by-facial-model random effects in the null model (without any fixed factors). The ICC coefficients indicated no substantial variation in either RTs or accuracy across facial models (< 0.1%), indicating that a random structure for facial models was unnecessary (McNabb & Murayama, 2021). Therefore, we included only the by-participant random-effects structure. The random slopes of all predictors and the random intercepts were determined using backward model selection based on likelihood-ratio tests (Matuschek et al., 2017). The model reduction procedure started with the full model containing a random intercept and random slopes for all fixed factors (Barr et al., 2013). We then defined a set of reduced models, each excluding one of the random slopes. A reduced model was retained when the likelihood-ratio test against the more complex model was not significant. This procedure was repeated until the more complex model was selected or all random slopes had been excluded. Models that failed to converge were not considered.
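The sketch below illustrates both steps under the same assumed column names: the ICC decomposition of the null model with crossed random intercepts, and one iteration of the backward selection of random slopes:

```r
library(lme4)
library(performance)  # icc() for group-wise variance decomposition

# Null model with crossed random intercepts for participants and face models
null_model <- lmer(log(rt) ~ 1 + (1 | participant) + (1 | face_model),
                   data = clean)
performance::icc(null_model, by_group = TRUE)  # face_model ICC was < 0.1%

# One step of backward selection: drop a random slope and keep the reduced
# model if the likelihood-ratio test against the fuller model is not significant
full    <- lmer(log(rt) ~ matching * association +
                  (1 + matching + association | participant), data = clean)
reduced <- lmer(log(rt) ~ matching * association +
                  (1 + matching | participant), data = clean)
anova(reduced, full)  # refits with ML and performs the LRT
```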
Additional analyses using Bayesian methods
A common critique of frequentist null-hypothesis significance testing (NHST) is that it cannot provide evidence in favor of the null hypothesis (Dienes, 2014). As a consequence, evidence of no effect cannot be distinguished from data that are merely insufficient to detect an effect. One of the great benefits of Bayesian analysis is that it provides an estimate of how strongly the empirical results support either the null or the alternative hypothesis (Nathoo & Masson, 2016). Here, we performed additional Bayesian analyses to complement the results of the frequentist analyses.
Following the approach of Muth et al. (2018), we fitted Bayesian LMMs and Bayesian GLMMs separately for RTs and accuracy. We used the stan_lmer and stan_glmer functions in the rstanarm 2.21.3 package and specified the same random-effects structure as the best model from the model selection procedure in the frequentist analysis. For the prior distributions, we used an unbiased, weakly informative prior, which is equivalent to L2 regularization. To evaluate the strength of evidence for or against an entire fixed factor rather than individual contrasts, we used a model comparison approach, comparing models including one fixed factor with models without that factor, similar to forward regression. For example, for self-association, we compared the model with this factor against the null model, and for the interaction between self-association and the matching factor, we compared the model with the interaction term against the model without it. We thus defined a set of models by adding one fixed factor at a time to the null model. We sampled the joint posterior distribution of each model by running sixteen Markov chain Monte Carlo (MCMC) chains of 8000 iterations each, discarding the first half of each chain as warm-up. All models had \(\hat{R}\) values lower than 1.1 (Gelman & Rubin, 1992), and visual inspection showed that all chains mixed well and reached stationary distributions, indicating good convergence. Because the Bayesian analysis was intended to compensate for the weakness of NHST in testing the null hypothesis, we report Bayes factors (\({\text{BF}}_{01}\)), i.e., the ratio of the marginal likelihoods under the null hypothesis (excluding the factor, H0) and the alternative hypothesis (including the factor, H1), given the data from this study. Based on Jeffreys's (1998) widely used evidence scale, a \({\text{BF}}_{01}\) > 3 indicates substantial evidence in favor of H0, and a \({\text{BF}}_{01}\) > 100 indicates decisive evidence for H0.
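The following sketch illustrates this model-comparison logic for the main effect of association (the actual analysis code is available on the project's OSF page; the model names, and the use of the bridgesampling package to obtain marginal likelihoods, are our illustrative choices):

```r
library(rstanarm)
library(bridgesampling)  # marginal likelihoods for Bayes factors

# Null model (H0) and model including the factor of interest (H1), with the
# random-effects structure of the best frequentist model. rstanarm's default
# weakly informative normal priors correspond to L2 regularization;
# diagnostic_file is needed for bridge sampling with rstanarm fits.
m0 <- stan_lmer(log(rt) ~ 1 + (1 + matching + association | participant),
                data = clean, chains = 16, iter = 8000,
                diagnostic_file = file.path(tempdir(), "m0.csv"))
m1 <- stan_lmer(log(rt) ~ association +
                  (1 + matching + association | participant),
                data = clean, chains = 16, iter = 8000,
                diagnostic_file = file.path(tempdir(), "m1.csv"))

# BF01: marginal likelihood of H0 over H1; BF01 > 3 is substantial evidence
# for the null, BF01 > 100 decisive evidence
bf(bridge_sampler(m0), bridge_sampler(m1))
```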
Results
Perceptual matching task
After applying the backward model selection procedure with the likelihood-ratio test, we arrived at the following final LMM for RTs in the perceptual matching task: \(\ln \,{\text{RTs }}\sim {\text{matching}} * {\text{association }} + (1 + {\text{matching}} + {\text{association}} | {\text{participant }})\). The ANOVA-like omnibus tests revealed significant main effects of matching, \(\chi^{2} \left( 1 \right)\) = 168.37, p < 0.001, and association, \(\chi^{2} \left( 1 \right)\) = 126.80, p < 0.001, as well as a significant interaction between the two factors, \(\chi^{2} \left( 1 \right)\) = 112.75, p < 0.001. The additional Bayesian model comparison revealed very small Bayes factors (\({\text{BF}}_{01}\) < 0.01) for all fixed factors and their interaction, indicating that the data decisively supported the presence of all effects and confirming the results of the frequentist LMM.
Follow-up simple-effects analyses (Fig. 3A) showed that RTs for the self-label were significantly faster than those for the stranger-label, both in matching trials, p < 0.001, and in mismatching trials, p = 0.032. However, the difference in RTs was larger for the matching trials (difference in \(\ln {\text{RTs}}\) = 0.20) than for the mismatching trials (difference in \(\ln {\text{RTs}}\) = 0.03).
Regarding accuracy in the perceptual matching task, the final GLMM after selection was \({\text{ACC}} \sim {\text{matching}} * {\text{association}} + (1 + {\text{matching}} + {\text{association}} | {\text{participant }})\). Both fixed factors were significant, matching, \(\chi^{2} \left( 1 \right)\) = 21.02, p < 0.001, and association, \(\chi^{2} \left( 1 \right)\) = 14.59, p < 0.001, as was their interaction, \(\chi^{2} \left( 1 \right)\) = 8.76, p = 0.003. The complementary Bayesian GLMM comparison revealed a similar pattern, with the Bayes factor for the factor matching decisively supporting the alternative hypothesis (\({\text{BF}}_{01}\) < 0.01), indicating strong evidence for the difference between matching and mismatching trials. Although there was a significant difference in accuracy between self-label and other-label trials, the Bayesian analysis provided only weak evidence (\({\text{BF}}_{01}\) = 0.45), indicating that the data only slightly favored this difference over no difference. The evidence regarding the interaction term was weaker still (\({\text{BF}}_{01}\) = 0.72), although it pointed towards the existence of an interaction effect (Table 1).
Post-hoc comparisons revealed higher accuracy (Fig. 3B) for the self-labeled compared to the stranger-labeled faces, but only within the matching trials, p < 0.001, and not within the mismatching trials, p = 0.183, which is consistent with previous studies.
Facial emotion recognition task with composite faces
Regarding response times in the emotional expression recognition task, the model selection procedure identified the following final LMM: \(\ln {\text{RTs }}\sim {\text{association }}*{\text{emotion categories}} + (1 + {\text{association}} + {\text{emotion categories}} | {\text{participant}} )\). The only significant main effect was that of emotion categories, \(\chi^{2} \left( 5 \right)\) = 814.76, p < 0.001. Confirming the frequentist analysis, the Bayes factor was very small, \({\text{BF}}_{01}\) < 0.01, indicating strong evidence in favor of including the main effect of emotion. Post-hoc analysis (Fig. 4A) revealed that happy expressions were recognized most quickly (\(M_{{{\text{RTs}}}}\) = 1895 ms), followed by disgust (\(M_{{{\text{RTs}}}}\) = 1991 ms), anger (\(M_{{{\text{RTs}}}}\) = 2022 ms), and surprise expressions (\(M_{{{\text{RTs}}}}\) = 2031 ms), while fear (\(M_{{{\text{RTs}}}}\) = 2323 ms) and sadness (\(M_{{{\text{RTs}}}}\) = 2348 ms) were recognized more slowly than all other emotions. These results are consistent with many previous studies on facial emotion recognition (e.g., Mancini et al., 2018). However, neither the main effect of association (\(\chi^{2} \left( 1 \right)\) = 2.92, p = 0.087) nor its interaction with emotion categories (\(\chi^{2} \left( 5 \right)\) = 1.29, p = 0.936) was significant. The Bayesian model comparison also provided strong evidence for the null hypothesis, with \({\text{BF}}_{01}\) > 100 for both the effect of self-association and its interaction with emotion categories.
Accuracy in this task was further tested using the GLMM. After model selection, the final retained model was \({\text{ACC }}\sim {\text{ association }}*{\text{emotion categories}} + (1 + {\text{ association}} + {\text{emotion categories}} |{\text{ participant}})\). Results again showed a significant main effect of emotion categories, \(\chi^{2} \left( 5 \right)\) = 648.89, p < 0.001. Bayes factor model comparison decisively supported the alternative hypothesis, with \({\text{BF}}_{01}\) < 0.01, suggesting differences in accuracy between emotion categories. A similar pattern to the response time results emerged in the post-hoc analysis (Fig. 4B): happy expressions were recognized most accurately (\(M_{{{\text{ACC}}}}\) = 0.86), followed by anger (\(M_{{{\text{ACC}}}}\) = 0.84), disgust (\(M_{{{\text{ACC}}}}\) = 0.81), and surprise expressions (\(M_{{{\text{ACC}}}}\) = 0.79), whereas sadness (\(M_{{{\text{ACC}}}}\) = 0.70) and fear (\(M_{{{\text{ACC}}}}\) = 0.62) were recognized less accurately. Again, there was no significant main effect of association (\(\chi^{2} \left( 1 \right)\) = 1.04, p = 0.307) and no significant interaction between self-association and emotion categories (\(\chi^{2} \left( 5 \right)\) = 8.40, p = 0.135). With the complementary Bayesian approach, we found strong evidence (\({\text{BF}}_{01}\) > 100) in favor of the null hypothesis for both the main effect of association and its interaction with emotion categories (Table 2).
Discussion
Processing self-related information and emotional facial expressions are both essential to human social interaction. However, while numerous studies have focused on self-related information, few have explored how self-relevance influences facial emotion recognition. To eliminate the potential influence of confounding factors such as familiarity and overlearning in self-related research, we implemented a self-association paradigm to investigate how self-relevance influences the cognitive processing of facial expressions of emotion. Given previous methodological criticism regarding stimulus familiarity and the strong relationship between face identity and facial emotion processing, our first goal was to replicate the extension of the self-association paradigm to facial information processing (Woźniak & Knoblich, 2019), using a large and diverse sample. Our second goal was to examine whether self-relevance also modulates the processing of emotional facial expressions in a composite-faces paradigm with six emotion categories.
Self-relevance in the domain of face processing
As hypothesized, our results replicated the effect of self-relevance on face processing reported in previous studies (Woźniak & Knoblich, 2019). In the perceptual matching task, both frequentist and Bayesian analyses consistently showed that participants responded more accurately and more quickly in matching trials when an unknown facial stimulus was associated with their self-label, indicating that self-relevance facilitates face processing. Given the large and diverse sample studied, our study provides new, robust evidence for this phenomenon. Furthermore, we asked participants to evaluate multiple face models (four per association condition) to test whether self-relevance in face processing generalizes across multiple stimuli. This goes beyond previous studies, which often used only one face model per association condition (e.g., Payne et al., 2017).
Similar to previous studies, we did not observe a significant effect of self-relevance on accuracy in the mismatching trials. It is also in line with the literature that the effect of self-relevance on reaction times was smaller in the mismatching trials than in the matching trials. One explanation for these findings is that the unfamiliarity of the social label ("stranger" compared to "self") may have suppressed the effect of self-relevance (Woźniak & Knoblich, 2019). For example, studies have shown that the effect of self-relevance is weaker when using a foreign-language social label compared to a native-language label (Ivaz et al., 2016, 2019). However, as previous studies emphasized, the suppressive effect of social label familiarity does not negate the effect of self-relevance in matching trials, because the prioritization of self-associated processing can be observed even without any social label (Lee et al., 2021; Woźniak & Knoblich, 2019). Here, we demonstrated a significant effect of self-association even when accounting for the variance explained by matching versus mismatching trials. This indicates the robustness of the evidence on self-prioritization in face processing.
On the specificity of emotion categories
Regarding the facial emotion recognition task, our results replicate the specificity of recognizing emotions of different categories. This specificity has been observed in studies using different multimodal psychophysiological data, such as electromyogram activity (Künecke et al., 2014), brain blood flow (Fusar-Poli et al., 2009), and ERPs (Recio et al., 2014). Specifically, we found that happiness was perceived most accurately and quickly, whereas fear and sadness were perceived less accurately and more slowly than other facial emotion expressions. These findings are consistent with previous studies that have used the same task (Calder et al., 2000; Durand et al., 2007) or other tasks to measure emotion recognition from faces (Wilhelm et al., 2014). Taken together, our findings support the methodological recommendation of O'Sullivan and Ekman (2004) to use stimuli from a variety of different emotion categories, rather than focusing on only one or two, when measuring facial emotion recognition performance.
One significant limitation of our study is that we included only one measurement paradigm of facial emotion recognition. Although the measures in our study encompassed all emotion categories, which is an improvement over previous research, the use of only one task limits the generalizability of our findings. In experimental research focusing on individual differences, the performance on a specific task is usually decomposed into task- and construct-specific sources of variance (Schmiedek et al., 2009). It is assumed that a change in the task-specific source of variance could lead to a different conclusion regarding the psychological construct of interest. To rule out this possibility, methodologists recommend using multiple cognitive tasks to minimize the influence of task-specific sources of variance (Schmiedek et al., 2014). A previous multivariate study summarized sixteen different tasks for measuring facial emotion recognition (Wilhelm et al., 2014). Thus, a crucial next step would be to use multiple tasks to measure emotion recognition and its relationship to the self.
Self-relevance on the recognition of facial expressions
Contrary to our hypothesis, we found no evidence that associating the self with an unfamiliar face altered the recognition of facial emotion expressions. Surprisingly, we observed the same non-significant results regardless of whether we used accuracy or response times as an indicator. Furthermore, the large Bayes factor supporting the null hypothesis indicates that the absence of self-prioritization in the domain of emotion recognition cannot be attributed to a lack of statistical power or insufficient sample size. We also found non-significance and a Bayes factor strongly supporting the null hypothesis in the test of the interaction term between self-association and emotion categories, indicating the same pattern across all emotions. Thus, following the successful association of the self with unfamiliar faces, participants did not perceive any emotional expression displayed by the faces associated with self-label more quickly or more accurately than those labeled as “stranger”.
Our results do not conceptually replicate previous findings regarding self-relevance in the processing of facial emotion expressions. Previous studies have shown that self-association can prioritize the processing of happy faces in perceptual matching tasks (Constable et al., 2021; McIvor et al., 2021), a phenomenon known as the self-positivity bias (Herbert et al., 2013). One possible methodological explanation for our contradictory results lies in the different experimental designs used in our study and in previous studies. As discussed in the introduction, previous studies may have an inherent design flaw because they integrated emotional expressions into the perceptual matching task instead of using a separate, dedicated measure, which can lead to ambiguous interpretations (Siebold et al., 2015). Higher performance in a perceptual matching task with emotional expressions, as in these previous studies, can be attributed either to preferential processing of emotional stimuli or to a stronger association between emotional faces and the self. While some researchers may argue that the same ambiguity applies to our findings on face processing (the first goal of our study), we argue that previous work has ruled out this possibility by analyzing face-processing-related ERPs in the perceptual matching task (Woźniak et al., 2018).
Another possible methodological explanation for the observed discrepancy may lie in the intrinsic characteristics of this study's facial emotion recognition task. Although the facial emotion recognition paradigm used in this study has very good psychometric properties, also in the context of other tasks (Hildebrandt et al., 2015; Wilhelm et al., 2014), the composite face requires participants to view two emotional expressions simultaneously, which induces interference from the distracting emotional expression. Furthermore, the standard procedure, designed to overcome potential ceiling effects in recognizing prototypical expressions (Wilhelm et al., 2014), restricted specific emotions to fixed top or bottom positions, arguably limiting our opportunity to fully explore interactive relationships between emotion placements. Therefore, it is worthwhile to replicate the present study using alternative facial emotion recognition paradigms that challenge different processing mechanisms, for example the Emotion Hexagon test (Wilhelm et al., 2014).
Finally, as mentioned above, considerable evidence supports the self-positivity bias, suggesting that self-relevance enhances the recognition of positive facial expressions. In the task we used, the studied emotion categories were predominantly negative (e.g., sadness, fear, disgust, and anger) rather than positive (e.g., happiness) (An et al., 2017), which could potentially confound the results when using the composite face as a stimulus, given that happiness stimuli were always combined with a negative expression in the upper part of the face. The negative facial expressions could thus partially suppress the boosting effect of self-relevance on the positive facial expressions. However, the post-hoc analysis allows us to at least partly rule out this possibility. If the combination of positive and negative facial expressions had confounded the effect of self-relevance, an interaction effect would likely have occurred, as the combination of two negative facial expressions should yield worse performance than the positive–negative combination (a condition in which the self-positivity bias would at least partially occur). However, no significant interaction was observed, and the Bayes factor supported the null hypothesis. Statistically, this finding indicates that the effect of self-relevance on the recognition of different facial emotions remained consistent across conditions. Again, future research applying multiple tasks is needed to further evaluate these effects.
Two possible theoretical explanations
Beyond the potential methodological explanations discussed above, two possible theoretical explanations can be considered as well to account for the evidence provided in this study supporting a rather parallel processing of self-relevance and facial emotion recognition.
The first possible theoretical explanation revolves around the differentiation between the processing of self-associated facial information and the processing of facial expressions of emotion. According to Bruce and Young's prominent model of facial information processing (see Calder & Young, 2005), general face processing involves several stages: structural encoding, the establishment of face recognition units, person identity nodes, and semantic information units (Burton et al., 1990). Notably, according to the model, emotion expression recognition shares only the initial stage (structural encoding) with general face perception and then dissociates from it (Calder & Young, 2005). This dissociation has been supported by evidence from brain-injured patients (Bruyer et al., 1983; Tranel et al., 1988; Young et al., 1993), by functional brain imaging (Sergent et al., 1994), and by more recent, larger individual-differences research with a multitask approach (Hildebrandt et al., 2015). Therefore, although self-relevance is known to modulate cognitive processing at an early stage (Humphreys & Sui, 2016), emotional expression processing might not benefit from this modulation due to its separate processing route.
To further elucidate this explanation, we draw upon previous evidence from ERP studies. While research has demonstrated that the effect of self-relevance can be detected at a very early stage in the non-facial domain (Sui et al., 2023), findings from the facial domain indicate that self-associated faces are differentiated from other-associated faces only after 200 to 300 ms (Żochowska et al., 2021). This relatively late self-other discrimination in facial processing can be attributed to the access of person identity nodes and semantic information units in face processing. In contrast, many studies have reported that amplitude differences between emotional prototypes can be found within the first 200 ms (Luo et al., 2010; Recio et al., 2014), despite some conflicting evidence. Thus, while self-association of faces can accelerate general face processing at the level of person identity nodes and semantic information units, emotion expression processing remains unaffected, as the separate route for facial expressions has already recognized the expression by that point. In essence, our findings can be interpreted as follows: self-relevance influenced the person identity nodes and semantic information units of self-associated faces but did not affect the recognition of the emotion expressions displayed by these faces. This might be because the routes for invariant and expression-related facial information processing overlap only in the early stages. Consequently, this explanation accounts both for the lack of evidence for a difference in emotion expression recognition between self-associated and other-associated faces in our study and for the boosting effect of self-relevance on face processing, as demonstrated by Woźniak and Hohwy (2020) and Woźniak and Knoblich (2019).
However, the present findings provide no direct evidence for this theoretical explanation, as we relied solely on response accuracy and response times as indicators, which cannot provide detailed information about the stages of cognitive processing (Heitz, 2014). This limitation has motivated researchers to employ cognitive and psychophysiological techniques such as eye-tracking (Siebold et al., 2015) and ERPs (Schreiter et al., 2019; Woźniak et al., 2018), which should be considered in future research to address the above theoretical view. Two highly feasible modeling approaches to behavioral data might also help disentangle the underlying psychological processes during emotion expression recognition: the drift–diffusion model (Stafford et al., 2020) and mouse tracking (Scherbaum & Dshemuchadse, 2020). In the future, both techniques could be used to explore the different stages of emotional expression processing, allowing for more direct tests of the above theoretical explanation.
The second theoretical explanation pertains to the complex structure of the self. While the distinct route for facial emotion processing provides a reasonable justification for why self-relevance may not influence the recognition of expressions, it remains difficult to reconcile this with the abundant evidence of a close relationship between the self and emotional facial expression processing. For instance, a body of literature suggests that participants recognize facial expressions better when their own faces are used as stimuli (Li & Tottenham, 2013). Although the familiarity and overlearning of stimuli may explain this finding, an alternative reason could be the different structure of the self. According to the consensus of self-related research, the self, as a complex structure, has different conceptualizations, including the “bodily” self and the “conceptual” self (Farmer & Tsakiris, 2012).
While self-relevance is a powerful tool for exploring processing biases toward self-related information, it is primarily applied to changes at the level of the “conceptual” self (Maister & Farmer, 2016). Neuroimaging studies have shown that the self-association paradigm recruits the ventromedial prefrontal cortex (vmPFC), which is more closely related to the conceptual self-related neural network (Humphreys & Sui, 2016) than to the bodily self-related neural network (Tsakiris, 2010). In line with previous research indicating that self-association with an unknown face can alter facial representation at the conceptual level (Woźniak et al., 2018) but not at the bodily level (Payne et al., 2017), our study found that face processing was enhanced by self-association. However, unlike general face perception, the simulation theory of facial expression recognition suggests that successful emotion recognition from faces requires the activation of the sensorimotor cortex (Wood et al., 2016), which is part of the bodily self-related neural network (Tsakiris, 2010) and more closely associated with the bodily self (Farmer & Tsakiris, 2012). This is supported by research demonstrating improved facial expression recognition performance through bodily self-manipulation using the enfacement paradigm (Maister et al., 2013). Therefore, it is understandable that we did not find evidence for a role of self-relevance in emotion recognition from faces, because only the conceptual self was manipulated. Similar to the first explanation, the discrepancy between our findings and previous studies can thus be reconciled within this theoretical account.
However, like the first explanation, this explanation is not without its limitations. Some researchers may argue that the conceptual self and bodily self can co-influence each other. Previous research supports this argument, showing that manipulating the bodily self can affect the conceptual self and vice versa (Farmer & Tsakiris, 2012; Porciello et al., 2018). Therefore, it is possible that even a change in the conceptual self, such as in our study, could lead to similar effects as a change in the bodily self. However, our study's results suggest that this bidirectional relationship does not always hold. The conditions under which bidirectional relationships occur, and when they do not, remain unclear. It is possible that the relationship between the conceptual self and the bodily self is context-dependent, and that certain factors, such as the type of task or emotional stimuli used, may influence the direction and strength of this relationship (Porciello et al., 2018). Therefore, a future direction would be to use both self-association and enfacement paradigms to manipulate both the conceptual and bodily self and re-examine their influence on facial emotion recognition. This could help clarify the conditions under which bidirectional relationships occur and whether they are consistent across different contexts. Additionally, it could provide a more comprehensive understanding of the relationship between the conceptual and bodily self and their respective roles in emotion processing.
Conclusion
Our study contributes to the understanding of how self-relevance influences the cognitive processing of facial expressions of emotion. In a large and diverse sample, using the self-association paradigm, we replicated the effect of self-relevance on face processing but did not find evidence that self-relevance influences facial emotion recognition performance. Two possible theoretical explanations were proposed to account for the lack of evidence, but further research with extended experimental designs and more comprehensive measures is necessary to fully understand these mechanisms. Overall, our study adds to the literature on self- and facial emotion processing, highlighting the need for further research to better understand the complex interplay between these two processes.
Availability of data and materials
The datasets generated and the code for the data analysis can be found on the Open Science Framework (OSF) website for this project at https://osf.io/4n6j7/
Abbreviations
- ERPs: Event-related potentials
- 2AFC: Two-alternative forced choice
- RTs: Response times
- LMMs: Linear mixed models
- GLMMs: Generalized linear mixed models
- ANOVA: Analysis of variance
- NHST: Null-hypothesis significance testing
- BF: Bayes factor
- MCMC: Markov chain Monte Carlo
- vmPFC: Ventromedial prefrontal cortex
References
An, S., Ji, L.-J., Marks, M., & Zhang, Z. (2017). Two sides of emotion: Exploring positivity and negativity in six basic emotions across cultures. Frontiers in Psychology, 8, 610. https://doi.org/10.3389/fpsyg.2017.00610
Anwyl-Irvine, A. L., Massonnié, J., Flitton, A., Kirkham, N., & Evershed, J. K. (2020). Gorilla in our midst: An online behavioral experiment builder. Behavior Research Methods, 52(1), 388–407. https://doi.org/10.3758/s13428-019-01237-x
Armitage, J., & Eerola, T. (2020). Reaction time data in music cognition: Comparison of pilot data from lab, crowdsourced, and convenience web samples. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2019.02883
Baayen, R. H., Davidson, D. J., & Bates, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59(4), 390–412. https://doi.org/10.1016/j.jml.2007.12.005
Bargh, J. A. (1982). Attention and automaticity in the processing of self-relevant information. Journal of Personality and Social Psychology, 43, 425–436. https://doi.org/10.1037/0022-3514.43.3.425
Barr, D. J., Levy, R., Scheepers, C., & Tily, H. J. (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3), 255–278. https://doi.org/10.1016/j.jml.2012.11.001
Bayer, M., Ruthmann, K., & Schacht, A. (2017). The impact of personal relevance on emotion processing: Evidence from event-related potentials and pupillary responses. Social Cognitive and Affective Neuroscience, 12(9), 1470–1479. https://doi.org/10.1093/scan/nsx075
Berger, A., & Kiefer, M. (2021). Comparison of different response time outlier exclusion methods: A simulation study. Frontiers in Psychology, 12, 675558. https://doi.org/10.3389/fpsyg.2021.675558
Bimler, D., & Kirkland, J. (2001). Categorical perception of facial expressions of emotion: Evidence from multidimensional scaling. Cognition and Emotion, 15(5), 633–658. https://doi.org/10.1080/02699930126214
Boisgontier, M. P., & Cheval, B. (2016). The anova to mixed model transition. Neuroscience & Biobehavioral Reviews, 68, 1004–1005. https://doi.org/10.1016/j.neubiorev.2016.05.034
Bruyer, R., Laterre, C., Seron, X., Feyereisen, P., Strypstein, E., Pierrard, E., & Rectem, D. (1983). A case of prosopagnosia with some preserved covert remembrance of familiar faces. Brain and Cognition, 2(3), 257–284. https://doi.org/10.1016/0278-2626(83)90014-3
Burton, A. M., Bruce, V., & Johnston, R. A. (1990). Understanding face recognition with an interactive activation model. British Journal of Psychology, 81(3), 361–380. https://doi.org/10.1111/j.2044-8295.1990.tb02367.x
Calder, A. J., & Young, A. W. (2005). Understanding the recognition of facial identity and facial expression. Nature Reviews Neuroscience. https://doi.org/10.1038/nrn1724
Calder, A. J., Young, A. W., Keane, J., & Dean, M. (2000). Configural information in facial expression perception. Journal of Experimental Psychology: Human Perception and Performance, 26(2), 527–551. https://doi.org/10.1037/0096-1523.26.2.527
Camerer, C. F., Dreber, A., Holzmeister, F., Ho, T.-H., Huber, J., Johannesson, M., Kirchler, M., Nave, G., Nosek, B. A., Pfeiffer, T., Altmejd, A., Buttrick, N., Chan, T., Chen, Y., Forsell, E., Gampa, A., Heikensten, E., Hummer, L., Imai, T., & Wu, H. (2018). Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nature Human Behaviour. https://doi.org/10.1038/s41562-018-0399-z
Candini, M., Zamagni, E., Nuzzo, A., Ruotolo, F., Iachini, T., & Frassinetti, F. (2014). Who is speaking? Implicit and explicit self and other voice recognition. Brain and Cognition, 92, 112–117. https://doi.org/10.1016/j.bandc.2014.10.001
Constable, M. D., Becker, M. L., Oh, Y.-I., & Knoblich, G. (2021). Affective compatibility with the self modulates the self-prioritisation effect. Cognition and Emotion, 35(2), 291–304. https://doi.org/10.1080/02699931.2020.1839383
Conway, J. R., Catmur, C., & Bird, G. (2019). Understanding individual differences in theory of mind via representation of minds, not mental states. Psychonomic Bulletin & Review, 26(3), 798–812. https://doi.org/10.3758/s13423-018-1559-x
Cunningham, S. J., Vogt, J., & Martin, D. (2022). Me first? Positioning self in the attentional hierarchy. Journal of Experimental Psychology: Human Perception and Performance, 48, 115–127. https://doi.org/10.1037/xhp0000976
Dalmaso, M., Castelli, L., & Galfano, G. (2019). Self-related shapes can hold the eyes. Quarterly Journal of Experimental Psychology, 72(9), 2249–2260. https://doi.org/10.1177/1747021819839668
Desebrock, C., Sui, J., & Spence, C. (2018). Self-reference in action: Arm-movement responses are enhanced in perceptual matching. Acta Psychologica, 190, 258–266. https://doi.org/10.1016/j.actpsy.2018.08.009
Dienes, Z. (2014). Using Bayes to get the most out of non-significant results. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2014.00781
Durand, K., Gallay, M., Seigneuric, A., Robichon, F., & Baudouin, J.-Y. (2007). The development of facial emotion recognition: The role of configural information. Journal of Experimental Child Psychology, 97(1), 14–27. https://doi.org/10.1016/j.jecp.2006.12.001
Elfenbein, H. A., & Ambady, N. (2002). On the universality and cultural specificity of emotion recognition: A meta-analysis. Psychological Bulletin, 128, 203–235. https://doi.org/10.1037/0033-2909.128.2.203
Farmer, H., & Tsakiris, M. (2012). The bodily social self: A link between phenomenal and narrative selfhood. Review of Philosophy and Psychology, 3(1), 125–144. https://doi.org/10.1007/s13164-012-0092-5
Feldborg, M., Lee, N. A., Hung, K., Peng, K., & Sui, J. (2021). Perceiving the self and emotions with an anxious mind: Evidence from an implicit perceptual task. International Journal of Environmental Research and Public Health, 18(22), 12096. https://doi.org/10.3390/ijerph182212096
Frassinetti, F., Ferri, F., Maini, M., Benassi, M. G., & Gallese, V. (2011). Bodily self: An implicit knowledge of what is explicitly unknown. Experimental Brain Research, 212(1), 153–160. https://doi.org/10.1007/s00221-011-2708-x
Frings, C., & Wentura, D. (2014). Self-priorization processes in action and perception. Journal of Experimental Psychology: Human Perception and Performance, 40(5), 1737–1740. https://doi.org/10.1037/a0037376
Fusar-Poli, P., Placentino, A., Carletti, F., Landi, P., Allen, P., Surguladze, S., Benedetti, F., Abbamonte, M., Gasparotti, R., Barale, F., Perez, J., McGuire, P., & Politi, P. (2009). Functional atlas of emotional faces processing: A voxel-based meta-analysis of 105 functional magnetic resonance imaging studies. Journal of Psychiatry and Neuroscience, 34(6), 418–432.
Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7(4), 457–472.
Gonzalez-Liencres, C., Shamay-Tsoory, S. G., & Brüne, M. (2013). Towards a neuroscience of empathy: Ontogeny, phylogeny, brain mechanisms, context and psychopathology. Neuroscience & Biobehavioral Reviews, 37(8), 1537–1548. https://doi.org/10.1016/j.neubiorev.2013.05.001
Guarnera, M., Hichy, Z., Cascio, M. I., & Carrubba, S. (2015). Facial expressions and ability to recognize emotions from eyes or mouth in children. Europe’s Journal of Psychology, 11(2), 183–196. https://doi.org/10.5964/ejop.v11i2.890
Happé, F., Cook, J. L., & Bird, G. (2017). The structure of social cognition: In(ter)dependence of Sociocognitive Processes. Annual Review of Psychology, 68(1), 243–267. https://doi.org/10.1146/annurev-psych-010416-044046
Hauser, D. J., & Schwarz, N. (2016). Attentive Turkers: MTurk participants perform better on online attention checks than do subject pool participants. Behavior Research Methods, 48(1), 400–407. https://doi.org/10.3758/s13428-015-0578-z
Heitz, R. P. (2014). The speed-accuracy tradeoff: History, physiology, methodology, and behavior. Frontiers in Neuroscience. https://doi.org/10.3389/fnins.2014.00150
Herbert, C., Sfaerlea, A., & Blumenthal, T. (2013). Your emotion or mine: Labeling feelings alters emotional face perception—an ERP study on automatic and intentional affect labeling. Frontiers in Human Neuroscience. https://doi.org/10.3389/fnhum.2013.00378
Hildebrandt, A., Sommer, W., Schacht, A., & Wilhelm, O. (2015). Perceiving and remembering emotional facial expressions—A basic facet of emotional intelligence. Intelligence, 50, 52–67. https://doi.org/10.1016/j.intell.2015.02.003
Humphreys, G. W., & Sui, J. (2016). Attentional control and the self: The Self-Attention Network (SAN). Cognitive Neuroscience, 7(1–4), 5–17. https://doi.org/10.1080/17588928.2015.1044427
Ivaz, L., Costa, A., & Duñabeitia, J. A. (2016). The emotional impact of being myself: Emotions and foreign-language processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42, 489–496. https://doi.org/10.1037/xlm0000179
Ivaz, L., Griffin, K. L., & Duñabeitia, J. A. (2019). Self-bias and the emotionality of foreign languages. Quarterly Journal of Experimental Psychology, 72(1), 76–89. https://doi.org/10.1177/1747021818781017
Jeffreys, H. (1998). The theory of probability (3rd ed.). Oxford University Press.
Kirita, T., & Endo, M. (1995). Happy face advantage in recognizing facial expressions. Acta Psychologica, 89(2), 149–163. https://doi.org/10.1016/0001-6918(94)00021-8
Kirouac, G., & Doré, F. Y. (1983). Accuracy and latency of judgment of facial expressions of emotions. Perceptual and Motor Skills, 57(3), 683–686. https://doi.org/10.2466/pms.1983.57.3.683
Künecke, J., Hildebrandt, A., Recio, G., Sommer, W., & Wilhelm, O. (2014). Facial EMG responses to emotional expressions are related to emotion perception ability. PLoS ONE, 9(1), e84053. https://doi.org/10.1371/journal.pone.0084053
Lapakko, D. (1997). Three cheers for language: A closer examination of a widely cited study of nonverbal communication. Communication Education, 46(1), 63–67. https://doi.org/10.1080/03634529709379073
Lee, N. A., Martin, D., & Sui, J. (2021). A pre-existing self-referential anchor is not necessary for self-prioritisation. Acta Psychologica, 219, 103362. https://doi.org/10.1016/j.actpsy.2021.103362
Lee, N. A., Martin, D., & Sui, J. (2023). Accentuate the positive: Evidence that context dependent self-reference drives self-bias. Cognition, 240, 105600. https://doi.org/10.1016/j.cognition.2023.105600
Li, Y. H., & Tottenham, N. (2013). Exposure to the self-face facilitates identification of dynamic facial expressions: Influences on individual differences. Emotion, 13, 196–202. https://doi.org/10.1037/a0030755
Luo, W., Feng, W., He, W., Wang, N.-Y., & Luo, Y.-J. (2010). Three stages of facial expression processing: ERP study with rapid serial visual presentation. NeuroImage, 49(2), 1857–1867. https://doi.org/10.1016/j.neuroimage.2009.09.018
Ma, Y., & Han, S. (2010). Why we respond faster to the self than to others? An implicit positive association theory of self-advantage during implicit face recognition. Journal of Experimental Psychology: Human Perception and Performance, 36, 619–633. https://doi.org/10.1037/a0015797
Maister, L., & Farmer, H. (2016). Attending to the bodily self. Cognitive Neuroscience, 7(1–4), 28–29. https://doi.org/10.1080/17588928.2015.1075490
Maister, L., Tsiakkas, E., & Tsakiris, M. (2013). I feel your fear: Shared touch between faces facilitates recognition of fearful facial expressions. Emotion, 13(1), 7–13. https://doi.org/10.1037/a0030884
Mancini, G., Biolcati, R., Agnoli, S., Andrei, F., & Trombini, E. (2018). Recognition of facial emotional expressions among Italian pre-adolescents, and their affective reactions. Frontiers in Psychology, 9, 1303. https://doi.org/10.3389/fpsyg.2018.01303
Matuschek, H., Kliegl, R., Vasishth, S., Baayen, H., & Bates, D. (2017). Balancing type I error and power in linear mixed models. Journal of Memory and Language, 94, 305–315. https://doi.org/10.1016/j.jml.2017.01.001
McIvor, L., Sui, J., Malhotra, T., Drury, D., & Kumar, S. (2021). Self-referential processing and emotion context insensitivity in major depressive disorder. European Journal of Neuroscience, 53(1), 311–329. https://doi.org/10.1111/ejn.14782
McKendrick, M., Butler, S. H., & Grealy, M. A. (2016). The effect of self-referential expectation on emotional face processing. PLoS ONE, 11(5), e0155576. https://doi.org/10.1371/journal.pone.0155576
McNabb, C. B., & Murayama, K. (2021). Unnecessary reliance on multilevel modelling to analyse nested data in neuroscience: When a traditional summary-statistics approach suffices. Current Research in Neurobiology, 2, 100024. https://doi.org/10.1016/j.crneur.2021.100024
Meaux, E., & Vuilleumier, P. (2016). Facing mixed emotions: Analytic and holistic perception of facial emotion expressions engages separate brain networks. NeuroImage, 141, 154–173. https://doi.org/10.1016/j.neuroimage.2016.07.004
Muth, C., Oravecz, Z., & Gabry, J. (2018). User-friendly Bayesian regression modeling: A tutorial with rstanarm and shinystan. The Quantitative Methods for Psychology, 14, 99–119.
Nathoo, F. S., & Masson, M. E. J. (2016). Bayesian alternatives to null-hypothesis significance testing for repeated-measures designs. Journal of Mathematical Psychology, 72, 144–157. https://doi.org/10.1016/j.jmp.2015.03.003
Northoff, G. (2016). Is the self a higher-order or fundamental function of the brain? The “basis model of self-specificity” and its encoding by the brain’s spontaneous activity. Cognitive Neuroscience, 7(1–4), 203–222. https://doi.org/10.1080/17588928.2015.1111868
O’Sullivan, M., & Ekman, P. (2004). The wizards of deception detection. In The detection of deception in forensic contexts (pp. 269–286). Cambridge University Press. https://doi.org/10.1017/CBO9780511490071.012
Payne, S., Tsakiris, M., & Maister, L. (2017). Can the self become another? Investigating the effects of self-association with a new facial identity. The Quarterly Journal of Experimental Psychology, 70(6), 1085–1097. https://doi.org/10.1080/17470218.2015.1137329
Porciello, G., Bufalari, I., Minio-Paluello, I., Di Pace, E., & Aglioti, S. M. (2018). The ‘Enfacement’ illusion: A window on the plasticity of the self. Cortex, 104, 261–275. https://doi.org/10.1016/j.cortex.2018.01.007
Recio, G., Schacht, A., & Sommer, W. (2014). Recognizing dynamic facial expressions of emotion: Specificity and intensity effects in event-related brain potentials. Biological Psychology, 96, 111–125. https://doi.org/10.1016/j.biopsycho.2013.12.003
Rogers, T. B., Kuiper, N. A., & Kirker, W. S. (1977). Self-reference and the encoding of personal information. Journal of Personality and Social Psychology, 35, 677–688. https://doi.org/10.1037/0022-3514.35.9.677
Scheller, M., & Sui, J. (2022). The power of the self: Anchoring information processing across contexts. Journal of Experimental Psychology: Human Perception and Performance, 48, 1001–1021. https://doi.org/10.1037/xhp0001017
Scherbaum, S., & Dshemuchadse, M. (2020). Psychometrics of the continuous mind: Measuring cognitive sub-processes via mouse tracking. Memory & Cognition, 48(3), 436–454. https://doi.org/10.3758/s13421-019-00981-x
Schmiedek, F., Hildebrandt, A., Lövdén, M., Wilhelm, O., & Lindenberger, U. (2009). Complex span versus updating tasks of working memory: The gap is not that deep. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35, 1089–1096. https://doi.org/10.1037/a0015730
Schmiedek, F., Lövdén, M., & Lindenberger, U. (2014). A task is a task is a task: Putting complex span, n-back, and other working memory indicators in psychometric context. Frontiers in Psychology, 5, 1475. https://doi.org/10.3389/fpsyg.2014.01475
Schreiter, M. L., Chmielewski, W. X., Mückschel, M., Ziemssen, T., & Beste, C. (2019). How the depth of processing modulates emotional interference – evidence from EEG and pupil diameter data. Cognitive, Affective, & Behavioral Neuroscience, 19(5), 1231–1246. https://doi.org/10.3758/s13415-019-00732-0
Sergent, J., Ohta, S., Macdonald, B., & Zuck, E. (1994). Segregated processing of facial identity and emotion in the human brain: A PET study. Visual Cognition, 1(2–3), 349–369. https://doi.org/10.1080/13506289408402305
Siebold, A., Weaver, M. D., Donk, M., & van Zoest, W. (2015). Social salience does not transfer to oculomotor visual search. Visual Cognition, 23(8), 989–1019. https://doi.org/10.1080/13506285.2015.1121946
Stafford, T., Pirrone, A., Croucher, M., & Krystalli, A. (2020). Quantifying the benefits of using decision models with response time and accuracy data. Behavior Research Methods, 52(5), 2142–2155. https://doi.org/10.3758/s13428-020-01372-w
Stolte, M., Humphreys, G., Yankouskaya, A., & Sui, J. (2017). Dissociating biases towards the self and positive emotion. Quarterly Journal of Experimental Psychology, 70(6), 1011–1022. https://doi.org/10.1080/17470218.2015.1101477
Sui, J., & Gu, X. (2017). Self as object: Emerging trends in self research. Trends in Neurosciences, 40(11), 643–653. https://doi.org/10.1016/j.tins.2017.09.002
Sui, J., He, X., Golubickis, M., Svensson, S. L., & Neil Macrae, C. (2023). Electrophysiological correlates of self-prioritization. Consciousness and Cognition, 108, 103475. https://doi.org/10.1016/j.concog.2023.103475
Sui, J., He, X., & Humphreys, G. W. (2012). Perceptual effects of social salience: Evidence from self-prioritization effects on perceptual matching. Journal of Experimental Psychology: Human Perception and Performance, 38, 1105–1117. https://doi.org/10.1037/a0029792
Sui, J., & Humphreys, G. W. (2017). The ubiquitous self: What the properties of self-bias tell us about the self. Annals of the New York Academy of Sciences, 1396(1), 222–235. https://doi.org/10.1111/nyas.13197
Sui, J., Ohrling, E., & Humphreys, G. W. (2016). Negative mood disrupts self- and reward-biases in perceptual matching. The Quarterly Journal of Experimental Psychology, 69(7), 1438–1448. https://doi.org/10.1080/17470218.2015.1122069
Sui, J., Yankouskaya, A., & Humphreys, G. W. (2015). Super-capacity me! Super-capacity and violations of race independence for self- but not for reward-associated stimuli. Journal of Experimental Psychology: Human Perception and Performance, 41, 441–452. https://doi.org/10.1037/a0038288
Svard, J., Wiens, S., & Fischer, H. (2012). Superior recognition performance for happy masked and unmasked faces in both younger and older adults. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2012.00520
Tanaka, J. W., Kaiser, M. D., Butler, S., & Le Grand, R. (2012). Mixed emotions: Holistic and analytic perception of facial expressions. Cognition & Emotion, 26(6), 961–977. https://doi.org/10.1080/02699931.2011.630933
Tranel, D., Damasio, A. R., & Damasio, H. (1988). Intact recognition of facial expression, gender, and age in patients with impaired recognition of face identity. Neurology, 38(5), 690–690. https://doi.org/10.1212/WNL.38.5.690
Tsakiris, M. (2010). My body in the brain: A neurocognitive model of body-ownership. Neuropsychologia, 48(3), 703–712. https://doi.org/10.1016/j.neuropsychologia.2009.09.034
Uddin, L. Q. (2011). The self in autism: An emerging view from neuroimaging. Neurocase, 17(3), 201–208. https://doi.org/10.1080/13554794.2010.509320
Van Kleef, G. A. (2009). How emotions regulate social life: the emotions as social information (EASI) model. Current Directions in Psychological Science, 18(3), 184–188. https://doi.org/10.1111/j.1467-8721.2009.01633.x
Wells, L. J., Gillespie, S. M., & Rotshtein, P. (2016). Identification of emotional facial expressions: effects of expression, intensity, and sex on eye gaze. PLoS ONE, 11(12), e0168307. https://doi.org/10.1371/journal.pone.0168307
Wilhelm, O., Hildebrandt, A., Manske, K., Schacht, A., & Sommer, W. (2014). Test battery for measuring the perception and recognition of facial expressions of emotion. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2014.00404
Williams, D. (2010). Theory of own mind in autism: Evidence of a specific deficit in self-awareness? Autism, 14(5), 474–494. https://doi.org/10.1177/1362361310366314
Wood, A., Rychlowska, M., Korb, S., & Niedenthal, P. (2016). Fashioning the face: Sensorimotor simulation contributes to facial expression recognition. Trends in Cognitive Sciences, 20(3), 227–240. https://doi.org/10.1016/j.tics.2015.12.010
Woźniak, M., & Hohwy, J. (2020). Stranger to my face: Top-down and bottom-up effects underlying prioritization of images of one’s face. PLoS ONE, 15(7), e0235627. https://doi.org/10.1371/journal.pone.0235627
Woźniak, M., & Knoblich, G. (2019). Self-prioritization of fully unfamiliar stimuli. Quarterly Journal of Experimental Psychology, 72(8), 2110–2120. https://doi.org/10.1177/1747021819832981
Woźniak, M., Kourtis, D., & Knoblich, G. (2018). Prioritization of arbitrary faces associated to self: An EEG study. PLoS ONE, 13(1), e0190679. https://doi.org/10.1371/journal.pone.0190679
Yankouskaya, A., & Sui, J. (2021). Self-positivity or self-negativity as a function of the medial prefrontal cortex. Brain Sciences. https://doi.org/10.3390/brainsci11020264
Young, A. W., Newcombe, F., de Haan, E. H. F., Small, M., & Hay, D. C. (1993). Face perception after brain injury: Selective impairments affecting identity and expression. Brain, 116(4), 941–959. https://doi.org/10.1093/brain/116.4.941
Żochowska, A., Nowicka, M. M., Wójcik, M. J., & Nowicka, A. (2021). Self-face and emotional faces—Are they alike? Social Cognitive and Affective Neuroscience, 16(6), 593–607. https://doi.org/10.1093/scan/nsab020
Acknowledgements
The first author and corresponding author of this article, Tuo Liu, is now affiliated with the Goethe-Universität Frankfurt am Main. The research for this article was conducted while Tuo Liu was employed at Carl von Ossietzky Universität Oldenburg.
Funding
Open Access funding enabled and organized by Projekt DEAL. This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within the SPP 2134, projects HI 1780/5-1 and ZA 592/5-1.
Author information
Contributions
TL: Conceptualization, Methodology, Investigation, Formal analysis, Data Curation, Visualization, Writing—Original Draft. AH: Validation, Resources, Writing—Review and Editing, Supervision, Project administration, Funding acquisition. JS: Conceptualization, Software, Writing—Review and Editing, Supervision. All authors read and approved the final version of the manuscript.
Ethics declarations
Ethics approval and consent to participate
The study was reviewed and approved by the Committee of Ethics of the German Psychological Society (DGPs, reference number: AH 082018). All participants provided informed consent.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Liu, T., Sui, J. & Hildebrandt, A. To see or not to see: the parallel processing of self-relevance and facial expressions. Cogn. Research 8, 70 (2023). https://doi.org/10.1186/s41235-023-00524-8