Our aim was to use a naturalistic museum setting to test the influence of severe peripheral field restriction on spatial learning while navigating. This approach stands in contrast to much of the previous work examining low-vision mobility and navigation, which has used regular hallway environments that are not always representative of the complexity of real-world navigational challenges. Previous work in structured hallways suggests that although navigating through a novel space is cognitively demanding for people with visual impairment, spatial learning may be supplemented by the additional body-based cues associated with walking and turning in regular and predictable ways (Barhorst-Cates et al., 2016). In the current study, participants completed a spatial learning task in an art museum while wearing goggles that simulated severe peripheral field restriction (10° FOV), compared to completing the task with mild FOV restriction (60° FOV). We found support for our predictions that spatial memory errors (revealed by pointing to remembered landmarks) and cognitive load would increase in the narrow compared to the wide FOV condition. We argue that the presence of a spatial learning deficit at this level of FOV restriction, unlike in Barhorst-Cates et al. (2016), is influenced by the context of the environment and the complexity of the required navigational paths.
The increase in spatial memory error seen in the narrow versus wide FOV condition in the museum context may have been due to a combination of visual, motor, and attentional influences on learning, related to the museum path structure. The visual components of navigating with field loss make the task more visually demanding, requiring more head and eye movements to integrate reduced views of the scene. Additionally, the information provided by physical movement may be less useful than in simpler environments, which would have a greater detrimental effect when visual information is reduced. Finally, attentional demands on the navigator increase both because route segments with less distinctive sub-sections must be integrated and because low vision increases the need to monitor one's own mobility. We discuss each of these factors in the following sections.
First, spatial learning with FOV restricted to 10° may be significantly impaired in the museum space because the environment cannot be viewed all at once. In more typically studied hallway or stationary-viewing environments, viewing with peripheral field loss requires increased head rotation and integration of multiple views, which results in worse performance (Barhorst-Cates et al., 2016; Fortenbaugh et al., 2007; Yamamoto & Philbeck, 2013). We suggest that navigation in a museum further increases these visual demands because of the open nature of the environment, which affects the overall intelligibility, or mutual visibility (Hillier, 2006), of the space. In other research in complex, open environments, Hölscher, Brösamle, and Vrachliotis (2012) found that spaces with insufficient visual access across the space and unexpected navigational features, such as dead-ends or staircases, impaired navigation. In a library wayfinding task, Li and Klippel (2016) also showed that visual accessibility is a vital component of the “environmental legibility” of a space, arguing that visual information can facilitate spatial knowledge even in complex spaces. While the current study focused specifically on spatial learning along complex paths with restricted FOV, other features of the complex museum environment, such as intelligibility, likely also increase navigation demands in low vision. Quantifiable measures of visibility should be incorporated into future research and into the design of navigation aids for the visually impaired. It may also be that distances traveled were perceived incorrectly or inconsistently along different path segments because of the effects of reduced FOV on perceived self-motion, as has been seen with reduced acuity and contrast sensitivity (Rand, Barhorst-Cates, Kiris, Thompson, & Creem-Regehr, 2018), contributing to overall error in memory for spatial layout. These vision-related components of navigation were not the primary aim of the current study, but they should be tested in future research by analyzing visibility in test buildings, measuring head and eye movements, and testing distance perception.
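As one illustration of what such a quantifiable visibility measure could look like, the sketch below approximates an isovist-style index (the proportion of the walkable floor plan visible from a given vantage point) by casting rays across a 2D occupancy grid. The grid layout, function name, and sampling parameters are hypothetical; this is a minimal example of the kind of space syntax-inspired metric that future studies or navigation aids might compute, not a procedure used in the current experiment.

```python
# Minimal sketch of a grid-based visibility ("isovist") measure.
# A floor plan is a 2D occupancy grid: 0 = walkable, 1 = wall/obstacle.
# The hypothetical function below estimates the fraction of walkable
# cells visible from a vantage point by casting rays and stopping at
# the first obstacle along each ray.

import math
import numpy as np

def visible_fraction(grid, row, col, n_rays=720, step=0.25):
    """Approximate isovist size at (row, col) as a fraction of walkable cells."""
    n_rows, n_cols = grid.shape
    visible = set()
    for k in range(n_rays):
        angle = 2 * math.pi * k / n_rays
        dr, dc = math.sin(angle) * step, math.cos(angle) * step
        r, c = row + 0.5, col + 0.5           # start at the cell centre
        while 0 <= r < n_rows and 0 <= c < n_cols:
            cell = (int(r), int(c))
            if grid[cell] == 1:               # ray blocked by a wall/obstacle
                break
            visible.add(cell)
            r, c = r + dr, c + dc
    n_walkable = int((grid == 0).sum())
    return len(visible) / n_walkable if n_walkable else 0.0

# Example: a small open gallery with one internal partition (invented layout).
plan = np.zeros((20, 30), dtype=int)
plan[0, :] = plan[-1, :] = plan[:, 0] = plan[:, -1] = 1   # outer walls
plan[5:15, 12] = 1                                        # partition wall
print(f"Visible fraction at (10, 5): {visible_fraction(plan, 10, 5):.2f}")
```

In such a scheme, low visible fractions along a route would flag points where a navigator with restricted FOV must integrate many partial views, which is the property we suggest makes the open museum space demanding.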
In order to view art pieces in a museum, people take circuitous routes that incorporate many turns and lead viewers along short individual paths into purposeful dead-ends (Peponis et al., 2004), increasing demands for learning and memory. In this study we attempted to mimic natural museum behavior by including many turns that moved in and out of alcoves and around obstacles such as benches and art pieces in the center of the room. The paths were not composed solely of orthogonal turns, and several possible directions of movement were available at any given time. This increased task difficulty by (1) providing less predictable route choices and (2) reducing the distinctiveness of decision points along the route. This may affect both the encoding and recall components of the task because of well-known decrements in cognitive map formation in complex environments (Byrne, 1979; Lynch, 1960; Moar & Bower, 1983; Tversky, 1981), as well as consequences arising from self-motion information through complex environments. While encoding the natural museum routes, a navigator may not be able to make accurate predictions of upcoming turns, which are rarely simple right-angle left or right turns. If a navigator used a strategy of remembering key turns along a route, having more turns to remember would inherently create more conflict in memory. Those non-orthogonal turns that are remembered might also be distorted in memory as either straight (Dalton, 2003) or orthogonal (Sadalla & Montello, 1989). In addition, while self-motion information from turns provides supplemental information for spatial learning with low vision in relatively simple environments, this information may not be as useful in the museum environment because of the need to associate target locations with one of many possible turns, as well as the increased error in path integration associated with those turns (Rieser & Rider, 1991). The body-based information from turns likely interacts with the lack of complete visual information in a way that amplifies spatial memory error. Zhao and Warren (2015b) discuss the reliability of different types of cues (visual and body-based) while navigating and argue that people shift between relying mostly on path integration (body-based cues) and mostly on visual or landmark cues depending on how reliable the visual cues are. With severely restricted peripheral vision, visual cues may be less reliable, or at least less detectable, which might encourage a shift toward body-based cues. In the museum context with many turns, this forced reliance on body-based cues may exacerbate spatial learning error through memory conflict and accumulated path integration error. Taken together, the increased demands on learning and memory and the decreased reliability of body-based cues could explain the greater deficit in spatial memory under severely restricted viewing in the museum compared to the structured hallway building.
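To make the cue-reliability argument concrete, the toy sketch below (not the authors' model, and not Zhao and Warren's; the numbers and function names are invented) combines a visual landmark estimate and a path-integration estimate of a landmark's bearing by inverse-variance weighting. When the visual cue becomes unreliable, as it plausibly does under a 10° FOV, the combined estimate leans on path integration, whose variance in this toy model grows with the number of turns, so the final error grows.

```python
# Illustrative only: inverse-variance weighting of two bearing estimates,
# one from visual landmarks and one from path integration. All values are
# invented for demonstration; this is not a fitted or published model.

import numpy as np

def combine(estimates, variances):
    """Reliability-weighted average of cue estimates (weights = 1/variance)."""
    w = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w))

def path_integration_variance(base_var, n_turns, per_turn_var):
    """Toy assumption: path-integration noise accumulates with each turn taken."""
    return base_var + n_turns * per_turn_var

true_bearing = 90.0                       # degrees to a remembered landmark
vis_estimate, pi_estimate = 95.0, 70.0    # hypothetical noisy cue readouts

for label, vis_var in [("wide FOV", 25.0), ("narrow FOV", 400.0)]:
    pi_var = path_integration_variance(base_var=50.0, n_turns=8, per_turn_var=20.0)
    fused = combine([vis_estimate, pi_estimate], [vis_var, pi_var])
    print(f"{label}: combined bearing = {fused:.1f} deg "
          f"(error {abs(fused - true_bearing):.1f} deg)")
```

Running the sketch shows the combined bearing drifting toward the noisier path-integration estimate when visual reliability drops, mirroring the qualitative pattern we propose for the many-turn museum routes.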
Last, the path complexity of the environment further increases the mobility monitoring demands that have been shown to detract from spatial learning (Rand et al., 2015). Mobility hazards could include turns, during which balance must be shifted and the body oriented in a new direction; obstacle avoidance, particularly avoidance of valuable pieces of art; and perceived changes in flooring that could affect balance or gait stability. Shorter stretches of walking could pose more threats to gait stability, as a walker would have less time to adjust and settle into a stable gait. In the museum environment, the greater number of turns, their less orthogonal nature, and the added difficulty of avoiding obstacles such as benches and centrally located art pieces all combine to create greater mobility monitoring demands than in more structured buildings. Of note, participants held onto the arm of the experimenter throughout each path to minimize accidental touching of the art pieces, so mobility monitoring demands were already reduced compared to walking alone (see Rand et al., 2015). Even so, our RT dual-task measure showed increased cognitive load with the narrow compared to the wide FOV. Future research could examine the effects of FOV restriction on navigating with full mobility monitoring demands (i.e., without a guide) in a museum-like environment. In the current study, path complexity and target value were also confounded (i.e., the museum paths all contained valuable, breakable art pieces and artifacts, whereas in our prior study the targets were ordinary building items). Future research could test for the effect of valuable obstacles specifically by comparing within-subjects performance on similarly complex paths containing non-breakable versus highly valuable objects.
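For readers unfamiliar with dual-task indices of cognitive load, the brief sketch below shows one common way such data are summarized: mean reaction time (RT) to a secondary probe is computed per participant in each FOV condition, and the narrow-minus-wide difference serves as a per-participant dual-task cost. The data and participant labels are invented for illustration; this is not the study's dataset or analysis script.

```python
# Hypothetical example of summarizing a dual-task RT measure of cognitive load.
# Longer probe RTs while navigating indicate fewer spare attentional resources.

import numpy as np

# Probe RTs (ms) per participant in each FOV condition; all values invented.
rt = {
    "wide_fov":   {"p01": [412, 398, 450, 430], "p02": [389, 402, 395, 410]},
    "narrow_fov": {"p01": [498, 512, 480, 505], "p02": [455, 470, 462, 481]},
}

def mean_rt(condition):
    """Mean probe RT per participant for one viewing condition."""
    return {p: float(np.mean(trials)) for p, trials in rt[condition].items()}

wide, narrow = mean_rt("wide_fov"), mean_rt("narrow_fov")
cost = {p: narrow[p] - wide[p] for p in wide}   # dual-task cost per participant
print("Mean dual-task cost (ms):", round(float(np.mean(list(cost.values()))), 1))
```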
There are other factors that could contribute to the detriment we observed in the museum, such as building novelty or the presence of a greater number of interesting distractors. We asked participants to report their level of familiarity with the building on a scale from 1 to 7, and 87.5% of participants reported a score of 6–7, indicating little familiarity, similar to the 91.4% of participants who reported a score of 6–7 for familiarity with the building in Barhorst-Cates et al. (2016). While the potential distractors in the art museum were likely more visually appealing than those in the structured building used in our earlier experiments, both environments contained the distractors typical of a real-world university campus building: people walking around, posters, advertisements, windows, and art pieces on the walls that might draw attention. To address the possible difference in the number of people present in the hallways of the two buildings, we purposefully scheduled the museum sessions on days and times when large tours did not visit, so that the number of people walking around was roughly similar in both buildings. Auditory distractions were also present in both cases, but markedly less so at the museum, making it, if anything, the less distracting environment in that respect.
Limitations and future directions
Our results could be explained by various influences on learning - visual, motor, attentional - that we believe combine to create the unique, demanding task of real-world navigation. However, we cannot claim that any one of these three factors alone contributes to spatial memory error in navigation with restricted peripheral field. The relative contribution of each factor should be systematically studied in future research, for instance by measuring head movements to examine visual encoding behaviors or by manipulating paths with more or fewer turns to examine the impact of turn number specifically. Admittedly, testing navigation in a complex real-world space leaves open multiple possibilities for interpreting the factors that influence impaired spatial learning with severely restricted peripheral field. In addition to differences in environmental complexity and path complexity compared to our previous work, the museum context introduced additional mobility challenges with an increased chance of collision with valuable art. To tease apart effects of environmental complexity versus path complexity, future research could design more and less complex paths in both visually simple and visually complex environments, using systematic space syntax analyses to assess visual complexity. Future research could also manipulate the mobility demands of navigating, and the contribution of body-based cues for movement in this context, by having participants locomote in a wheelchair. While there are limitations and challenges in experimental control that are inherent to naturalistic environments, our results provide initial insights into the interaction between environmental complexity and visual restrictions in spatial learning while navigating.
Our study was motivated by the navigational challenges faced by those with visual impairment and by the overarching goal of our National Institutes of Health (NIH)-funded Designing Visually Accessible Spaces project to enhance visual accessibility - the use of vision for perception of spatial layout and for safe and efficient travel through spaces. A limitation is that we used simulated restricted peripheral vision to control the amount of vision loss in this experiment. Given the challenges of accurately controlling field of view in the real world, and issues with stereo fusion when viewing with two eyes, we chose to allow restricted viewing in only the dominant eye. In the clinical low-vision setting, vision loss ranges widely from person to person, often involving combinations of field loss and severely degraded acuity and contrast sensitivity in varying parts of the visual field, along with reliance on eye movements and other strategies developed to compensate for vision loss. As such, the extent to which these simulations accurately represent people with real-world low vision is unknown. However, some prior research comparing simulated low vision with patients with clinical low vision has identified similar effects on spatial learning (Fortenbaugh et al., 2008; Legge, Granquist, et al., 2016). Studies by Fortenbaugh and colleagues showed that both simulated and real peripheral field loss led to underestimation of remembered distances to target locations after walking a pre-determined route in a virtual environment, and that these errors increased with decreasing FOV. However, those with real peripheral field loss also showed some differences in eye movements and fixations. Studying the navigation abilities of patients with clinically significant field loss in varied environments remains an important area for future research.
Beyond the low-vision motivation, our results also contribute to a broader understanding of the role of the peripheral visual field and of attentional demands in spatial learning while navigating indoor environments. While there have been a number of studies on the influence of restricted FOV on distance perception and mobility tasks (Creem-Regehr, Willemsen, Gooch, & Thompson, 2005; Fortenbaugh et al., 2007; Pelli, 1987; Wu, Ooi, & He, 2004), there is limited work on larger-scale navigation under restricted FOV conditions. Findings in this area are especially relevant to applications using augmented reality (AR), in which graphical cues can augment real spaces and potentially facilitate navigation, but the FOV of displays remains very restricted (e.g., the state-of-the-art Microsoft HoloLens has a 30° × 17° FOV). Our findings suggest that reduced FOV may be more detrimental with complex paths or in risky navigation contexts. In addition, our results supporting effects of limited attentional resources on spatial learning can be generalized beyond the specific FOV manipulation to other contexts where mobility monitoring demands are high, such as navigation by older adults (Barhorst-Cates et al., 2017; Schellenbach, Lövdén, Verrel, Krüger, & Lindenberger, 2010) or navigation in other visually impoverished environments. Finally, we hope to use the current findings on the challenges posed by path complexity to inform the development of assistive devices that could compensate for increased attentional demands. For example, the demands of integrating multiple views could be reduced by providing auditory information about visual context outside the field of view, or additional multisensory cues could be provided to link physical turns with salient landmarks.