Fig. 1 | Cognitive Research: Principles and Implications


From: Sound localization in noisy contexts: performance, metacognitive evaluations and head movements


A Setting: schematic representation of the participant wearing the head-mounted display (HMD) and holding the virtual reality (VR) controller during the head-pointing sound localization task. The nine spheres in front of the participant indicate the predetermined speaker positions (not visible in the HMD). The bottom-right inset shows the participant's perspective: they were in an empty room and, at the end of each audio track, were instructed to use the controller to place the small sphere at the position where they thought the sound source was. B Experimental procedure. Left, exposure phase: three blocks comprising a total of 12 trials. In each block, participants experienced a different noisy context: nature (green), traffic (gray), or cocktail party (coral). During each trial, participants listened to target speech embedded in one of the three possible noisy contexts. At the end of each trial, they were asked to rate their effort and self-efficacy on a Likert scale. Right, sound localization phase: in each block, participants experienced a different noisy context (nature, green; traffic, gray; cocktail party, coral), with blocks presented in random order. During each trial, participants listened to target speech embedded in one of the three possible noisy contexts. At the end of the sound, they were instructed to localize the source by using the handheld controller to move a light-blue sphere, adjusting its size once they were sure it covered the target position. Afterward, they rated their effort and confidence on a Likert scale. C Graphical representation of the indices describing head-related behavior: extent of head rotation, number of reversals, and approaching index
