Through the Google Glass: The impact of heads-up displays on visual attention
© The Author(s) 2016
Received: 23 March 2016
Accepted: 30 August 2016
Published: 5 November 2016
In five experiments, we evaluated how secondary information presented on a heads-up display (HUD) impacts performance of a concurrent visual attention task. To do so, we had participants complete a primary visual search task under a variety of secondary load conditions (a single word presented on Google Glass during each search trial). Processing of secondary information was measured through a recognition memory task. Other manipulations included relevance (Experiments 1–4) and temporal onset of secondary information relative to the primary task (Experiment 3). Secondary information was always disruptive to the visual search, regardless of temporal onset and even when participants were instructed to ignore it. These patterns were evident in search tasks reflective of both selective (Experiments 1–3) and preattentive (Experiment 4) attentional mechanisms, and were not a result of onset-offset attentional capture (Experiment 5). Recognition memory for secondary information was always above chance. Our findings suggest that HUD-based visual information is profoundly disruptive to attentional processes and largely immune to user-centric prioritization.
In five experiments, we break new empirical ground by characterizing dual-task impairments associated with secondary information presented on a heads-up display (HUD) (i.e., Google Glass) during a primary visual search task. By combining two classical cognitive psychology paradigms (visual search and recognition memory), our studies dissociate impairments to selective and preattentive mechanisms while quantifying the extent to which secondary information is processed in a context increasingly encountered in the real world (e.g., information projected on a windshield when driving). Our results indicate that secondary HUD-based information is ubiquitously disruptive to attentional mechanisms, independent of user-centric prioritization and the time course of secondary information.
Our lives are being continuously and increasingly intermingled with technology (e.g., smartphones, wearable HUDs). While creating informationally rich environments might lead to productivity benefits in some contexts and convenience in others, designers, scientists, and users need to understand how technological integration might also be harmful. We investigate this latter context in our present research, which contains a unique blend of theoretically relevant and practically applicable data that should be of interest to a wide audience, including psychologists, engineers, designers, policy makers, and the general public.
Mobile technology has become essential and pervasive in the everyday lives of many people. Understanding the extent to which increasingly integrated information systems, such as cell phones (Drews, Yazdani, Godfrey, Cooper, & Strayer, 2009; Strayer, Drews, & Johnston, 2003) and other user interfaces, impact human performance on a range of common tasks and cognitive processes is of critical importance. Specifically, how does the adoption of various technologies remove a user from the present moment or task at hand, and at what cost (Starner, 2002)? Mobile technologies, for instance, have progressed from cell phones to wearable interfaces, leaving users in constant contact with their devices, regardless of whether they explicitly choose to engage with that device.
It has been well established that engaging in multitasking induces costs to performance (Allport, 1980; Horrey & Wickens, 2006; Neider, McCarley, Crowell, Kaczmarski, & Kramer, 2010; Strayer et al., 2003). In the practical domain, much of this research is focused on cell phone engagement in the context of driving or walking (Horrey & Wickens, 2006; Kramer, Hahn, Irwin, & Theeuwes, 1999; Neider et al., 2010). For example, using a cell phone or text-to-speech interface while driving significantly increases cognitive load and crash risk (Drews et al., 2009; Strayer et al., 2013), and it impairs memory for visual information (Strayer et al., 2003). While a focus on cell phone-related distraction has made practical sense, given the approximately 7.1 billion mobile subscriptions internationally (International Telecommunication Union ITU, 2015), emergent technologies are moving toward a user-integrated approach favoring HUDs. HUDs have long been used in aviation cockpits and are now being employed in everyday environments, such as automobiles (e.g., Cadillac and Mercedes vehicles), or integrated directly with the user, such as with Google Glass (GG) and Oculus Rift (Ceurstemont, 2014). Unlike cell phones, HUDs typically present users with a persistent stream of visual information (though systems such as GG can provide auditory information as well), increasing the likelihood of interference with other concurrent visual tasks (Wickens, 2002, 2008). Although prior work in the multitasking domain is nearly unanimous in demonstrating performance impairments under such conditions across a variety of contexts, novel reappropriations of existing technologies can carry with them an implicit expectation that they might immunize against such impairments.
HUDs, which make use of transparent displays, have been used with great success in the aviation domain; however, the information-processing needs and priorities of a pilot at 30,000 feet are likely to be very different from those of a driver on the ground who might have only seconds to respond to a potential hazard. Consequently, as HUDs become increasingly used in less specialized contexts, it becomes imperative to understand how they might impact overall behavior when set against attentional limitations. To date, the literature relating HUD-based technology to attention and performance costs in everyday contexts has been minimal (Starner, 2002; Wolffsohn, McBrien, Edgar, & Stout, 1998).
Our goals in the present experiments were twofold. First, we wanted to characterize the extent to which visual information presented on a user-worn HUD (e.g., GG) impacts performance on a primary visual task, and how such effects might be modulated by the relevance and temporal presentation (i.e., onset prior to, concurrent with, or following the onset of the primary task) of the HUD-based information. Second, we wanted to shed light on possible attentional mechanisms underlying performance costs arising from information presented on HUDs while engaged in a concurrent primary task (analogous to conversing on a cell phone while driving). To do so, we employed a visual search paradigm as our primary task, allowing us to isolate impairments to both parallel and serial attention mechanisms. Whereas efficient search for singleton targets is thought to involve parallel, preattentive processes (and less so selective attention), searches that are inefficient are thought to require serial attention processes that rely heavily on selective attention (Wolfe, 1998). Critically, if performance impairments occurred only during inefficient search, it would suggest that secondary task information presented on the GG is largely detrimental to selective attentional processes, perhaps those related to efficiently guiding attention toward the target. Alternatively, if secondary information presented on the GG induces performance costs during singleton search, it would suggest impairment to preattentive processes as well (though it would not rule out some impairment to selective attention mechanisms), and more generally to broader visual processing. An additional benefit of using a search task is that search is a vital operation for everyday function; humans must constantly locate task-relevant information (such as a pedestrian about to run into a roadway) in the environment. Thus, visual search is both a theoretically useful and practically relevant paradigm to assess HUD-based dual-task effects.
In all experiments, the participant’s primary task was to locate a T target among L distractors displayed on a computer screen. In some conditions, the secondary information, in the form of a single word, was concurrently presented on a GG that was worn during a portion of the experiment. In Experiment 1, we characterized primary task performance costs associated with the presentation of secondary information on the GG while also manipulating the perceived relevance of the secondary information (through instructions) to the participant. We predicted response time (RT) costs to the visual search task in the presence of a secondary information stream, as well as an added cost when participants were told the information was useful. The extent to which secondary task information was processed was assessed through a surprise recognition memory task administered after all search trials were completed. In Experiment 2, we manipulated the context of the secondary information presented on the GG by informing participants of the recognition memory task. We expected secondary information to be more disruptive to the primary task when participants were aware that they would be tested on it. In Experiment 3, we explored the degree to which variation in the time course of the onset of secondary information impacted primary task performance (prior to, concurrently, or following the primary task), and the extent to which this might interact with the perceived relevance of that information. We expected concurrent presentation to produce larger costs to primary task performance, with this cost increasing when the secondary task was perceived as more relevant. In Experiment 4, we manipulated the saliency of the target T to elicit singleton search behavior to evaluate whether performance costs are exclusive to selective attention mechanisms or exist for preattentive processes as well.
In the final experiment, we masked the onset and offset of the secondary task information to guard against the possibility that our effects might be more closely related to some reflexive reorienting of attentional processes toward an abrupt stimulus onset, as opposed to informational processing impairments associated with managing dual-task demands.
Ninety participants from the University of Central Florida’s undergraduate research pool participated (56 females, M age = 19.58) for course credit. Eighteen participants were assigned to each experimental condition, based on previous research (Neider et al., 2010). Any participant run in a noisy environment or with experimenter error was replaced by a new participant in the same condition. All participants had normal or corrected-to-normal visual acuity and normal color vision. Consent was obtained prior to screening and experimentation, as per the Declaration of Helsinki. This research was approved by University of Central Florida’s Institutional Review Board (IRB Number SBE-14-10257). The total experiment took about 1 h to complete.
There were five GG conditions associated with secondary task load. To create a baseline and a control for any visual occlusion that might occur when wearing the GG, we included two conditions where no secondary information was presented. These no-load conditions had participants performing the search task without wearing the GG (control) or while wearing the GG with no information presented on it (glass only). The other three GG conditions were similar, except that secondary information (a single word) was presented on the GG for 2000 milliseconds while the participant concurrently performed the search task (dual-task conditions). In conditions where secondary information was presented on the GG, participants were instructed that (1) they should ignore the information on the GG, (2) the information on the GG was irrelevant, or (3) the information on the GG might be useful for the primary task. Regardless of instruction, secondary information was never meaningful for the primary task. The words appeared simultaneously with the onset of the search display. The GG screen display size was approximately 2.5 degrees of visual angle. Words for the secondary task were randomly selected from the MRC Psycholinguistic Database (Coltheart, 1981), based on the parameters of length (4–7 letters), syllables (3 or fewer), and frequency in the English language (frequency range of 15–100).
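The stimulus-selection constraints above can be made concrete with a short sketch. This is not the study's actual code (the experiment used a Google Glass Java application); the word records below are invented placeholders standing in for MRC Psycholinguistic Database entries, used only to illustrate the stated selection parameters (4–7 letters, 3 or fewer syllables, frequency 15–100).

```python
import random

# Hypothetical records mimicking the fields the MRC database provides
# per word (Coltheart, 1981); these example words and values are invented.
words = [
    {"word": "candle", "syllables": 2, "frequency": 25},
    {"word": "umbrella", "syllables": 3, "frequency": 18},   # too long (8 letters)
    {"word": "cat", "syllables": 1, "frequency": 90},        # too short (3 letters)
    {"word": "melody", "syllables": 3, "frequency": 40},
    {"word": "extraordinary", "syllables": 5, "frequency": 30},  # fails both
]

def eligible(w):
    """Apply the selection parameters described in the text."""
    return (4 <= len(w["word"]) <= 7
            and w["syllables"] <= 3
            and 15 <= w["frequency"] <= 100)

pool = [w["word"] for w in words if eligible(w)]
stimulus = random.choice(pool)  # one word presented per search trial
```

Of the invented examples, only "candle" and "melody" survive the filter, which mirrors how the constraints jointly narrow the candidate set.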
A surprise recognition memory task was administered following the completion of the primary experimental task to determine the extent to which secondary words were processed in the dual-task conditions (Jones, Jacoby, & Gellis, 2001). Previously seen words and new words (60 of each), which were sampled using the aforementioned parameters, were interspersed, and participants were asked to respond whether the word was presented when they performed the main experimental task. Each word was presented for 1500 milliseconds, followed by six asterisks to cue a response. During 16 practice trials, participants received feedback regarding the accuracy of their responses. Throughout the recognition task, if the participants failed to respond during the cue display, they received feedback that they had not responded. Nonresponses were counted as errors during analysis.
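The paper repeatedly reports recognition performance as "above chance," though the specific statistical test is not described in this excerpt. One simple way to make that comparison, given the balanced design (60 old and 60 new words, so chance = .5), is an exact binomial test; the trial counts below are invented for illustration, not the study's data.

```python
from math import comb

def binomial_p_above_chance(correct, total, p0=0.5):
    """One-tailed exact binomial probability of observing >= `correct`
    hits out of `total` trials if the participant responds at chance (p0)."""
    return sum(comb(total, k) * p0**k * (1 - p0)**(total - k)
               for k in range(correct, total + 1))

# Illustrative: 75 of 120 recognition trials correct against a .5 baseline
# (60 old + 60 new words). A small p indicates above-chance recognition.
p = binomial_p_above_chance(75, 120)
```

With these invented numbers, p falls well below .05, which is the kind of evidence behind an "above chance" claim; performing at exactly the chance rate (60 of 120) would instead yield p near .5.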
Design and procedure
In Experiment 1, we employed a mixed factorial design, with set size (50 or 80 search items) as a within-subject factor and GG condition (control, glass, and dual-task conditions) as a between-subjects factor. Participants completed 30 practice trials of the search task without the secondary task. Following a brief break, each participant received instructions (between subjects) regarding the relevancy of the information presented on the GG. Participants in all conditions except the control were instructed to put on the GG so that the screen was visible and aligned with the top edge of the computer monitor. Following the completion of the visual search task, participants returned the GG to the researchers and read instructions regarding the word recognition memory task. They completed 16 practice trials before beginning the recognition memory task.
Results and discussion
Table 1. Visual search accuracy for Experiments 1–5
Experiment 1: F(4, 77) = 0.65, p = .627
Experiment 2: F(3, 61) = 2.15, p = .103
Experiment 3: F(3, 61) = 0.57, p = .636
Experiment 4: F(3, 58) = 1.65, p = .187
Experiment 5: F(2, 53) = 0.04, p = .960
Fig. 2. Search response times (mean RT in milliseconds) for Experiments 1–5.
Generally speaking, presenting information on the GG concurrently with the search task induced costs to RT performance. However, participants were unaware that they would be tested on the secondary information presented on the GG, which may have disproportionately biased them toward discounting that information. To address this possibility, in Experiment 2 we informed participants of the recognition memory test.
Seventy-two naive participants (52 females, M age = 19.89) were recruited for Experiment 2. All experimental details were identical to those in Experiment 1, except for the following three changes: (1) we eliminated the dual-task ignore condition because of its similarity to the dual-task irrelevant condition; (2) we changed all of the dual-task instructions so that the secondary task was framed relative to the entire experiment as opposed to just the visual search task; and (3) we informed the participants of the recognition memory task.
Seven participants were excluded from the analyses because of accuracy or RT values more than 2 SD from the mean. Overall, the data were similar to those in Experiment 1. We found no differences for accuracy across conditions (see Table 1). When secondary information was presented, participants took longer to perform the primary search task (F[3, 61] = 4.16, p = .010, η2 = .170), and RTs in the dual-task conditions were significantly different from those in the control conditions (ps < .05). Interestingly, search RTs were longer when purportedly useful secondary information was presented on the GG (p < .05) (see Fig. 2b). We also found an effect of set size (F[1, 61] = 71.73, p < .001, η2 = .540), but no interaction between set size and GG condition (F[3, 61] = 0.43, p = .731, η2 = .021). RT × set size function slopes averaged 22.23 milliseconds per item. Memory performance was significantly above chance (all ps < .05) in the recognition task, with no difference between conditions (F[1, 29] = 0.00, p = .952, η2 = .004).
The data patterns derived from the first two experiments are both surprising and alarming. Participants were unable to filter out secondary information presented on the GG. More practically, our data strongly suggest that observers cannot completely inhibit secondary information presented on a HUD, even when they want to or are instructed to do so. Perhaps equally concerning, when participants in Experiment 2 were biased to attend to HUD-based information (i.e., instructed that the information might be useful), RTs increased by about 86%. A real-world analogue would be an individual receiving a text message or visual route information on a HUD while driving and choosing to allocate attention to this secondary information at the expense of performance on the primary task.
Our data derived from Experiments 1 and 2 clearly suggest that secondary information presented on a HUD elicits RT costs to concurrent tasks involving visual attention; however, the data are limited to cases where the secondary information was time-locked to onset concurrently with the primary task. In the real world, information ebbs and flows. Distracting information often is received when an observer is already engaged in another task (e.g., text messages received while driving a vehicle). As such, in Experiment 3, we manipulated the timing of the onset of the secondary information.
Seventy-two new participants were recruited for Experiment 3 (45 females, M age = 18.71). To characterize the extent to which selective attention mechanisms are impaired when information is not time-locked, in Experiment 3 we manipulated the timing of the onset of the HUD-based secondary information (−500, −250, 0, 250, and 500 milliseconds relative to primary visual search task onset). For Experiment 3, we used a mixed factorial design with set size (50 and 80 items) and secondary information onset time as within-subject factors and GG condition (control, glass only, and dual-task conditions) as a between-subjects factor. All other experimental details were identical to those in Experiment 2.
Combined, these data suggest that, generally speaking, secondary information induced a cost to the primary visual search task regardless of when it appeared, and they underline how generally distracting HUD-based information may be during multitasking. Again, recognition memory performance was above chance (ps < .05), regardless of dual-task condition (F[1, 30] = 0.04, p = .844, η2 = .001).
The results from Experiments 1–3 clearly demonstrate that secondary visual information presented on a HUD interferes with the processing and completion of a concurrent visual task requiring selective attention. It is unclear, however, whether selective attention, which is thought to be serial in nature, represents the bottleneck through which dual-task effects might induce broader performance costs.
In our previous experiments, we used a visual search paradigm where the target was difficult to discern from the distractors. In Experiment 4, we altered our primary search task by making the target object red, effectively creating a singleton search task. Importantly, singleton search relies on parallel preattentive mechanisms, as opposed to selective attention (Treisman & Gelade, 1980; Wolfe, 2010). Our goal in Experiment 4 was to evaluate whether the costs associated with irrelevant information presented on the GG are exclusive to tasks in which selective attention mechanisms are required.
Seventy-two naive participants were recruited for Experiment 4 (47 females, M age = 18.83). All methods were identical to Experiment 2, with one exception. Specifically, we adjusted the color of the target T to red (RGB 237-0-0) to increase saliency and elicit singleton search behavior.
Ten participants were removed from analyses because of accuracy or RT values more than 2 SD from the mean. We found no differences for accuracy across conditions (see Table 1). RT × set size functions were consistent with patterns reflective of singleton search (average slope of 1.57 milliseconds per item) (Wolfe, 1998). Patterns of RT costs were also similar to those in our previous experiments. There were significant main effects of GG condition (F[3, 58] = 9.04, p < .001, η2 = .319) (see Fig. 2d) and set size (F[1, 58] = 13.32, p = .001, η2 = .187), but no interaction between condition and set size (F[3, 58] = 0.63, p = .598, η2 = .032). We found that the dual-task conditions had slower RTs than the control conditions (ps < .05). Consistent with Experiments 1–3, performance in the recognition task remained above chance (ps < .05) and did not differ across GG conditions (F[1, 29] = 0.02, p = .883, η2 = .001).
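The RT × set size slopes reported here (1.57 ms/item for singleton search, versus roughly 22 ms/item in the inefficient searches of the earlier experiments) follow from a rise-over-run computation across the two set sizes. A minimal sketch, using invented RT means rather than the study's raw data:

```python
def search_slope(rt_small, rt_large, n_small=50, n_large=80):
    """Slope of the RT x set size function in ms per item.
    With only two set sizes this reduces to rise over run."""
    return (rt_large - rt_small) / (n_large - n_small)

# Illustrative (invented) means: a 47 ms RT increase from set size 50
# to set size 80 yields about 1.57 ms/item, the near-flat slope taken
# as the signature of efficient, preattentive "singleton" search.
slope = search_slope(1200.0, 1247.0)
```

By contrast, plugging in a several-hundred-millisecond RT increase across the same set sizes would produce the steep, ~20+ ms/item slopes characteristic of serial, selective search.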
These data indicate that interference associated with visual HUD-based distraction is broad, affecting not only selective attention mechanisms but also processes associated with the perceptual extraction of visual features.
In Experiments 1–4, the screen on the GG remained blank until a word was presented. As a result, word presentations on the GG could be characterized as abrupt onsets. A large body of literature has shown that such onsets are particularly effective at capturing attentional processes and might be reflexive in nature (Chua, 2013; Folk & Remington, 2015; Theeuwes, Kramer, Hahn, Irwin, & Zelinsky, 1999; Yantis & Jonides, 1984). Given these findings, it is possible that the dual-task costs observed up until this point may not be associated with some limitation in multitasking ability, but rather arose solely from the sudden onset of the secondary stimulus. To test this possibility, in Experiment 5 we presented a persistent visual mask on the GG that was replaced by a word at the onset of the primary visual search task. Finding a pattern of data consistent with Experiments 1–4 would support the assertion that dual-task performance costs associated with HUD-based information are best characterized within the context of basic attentional limitations.
Fifty-seven naive participants were recruited for Experiment 5 (26 females, M age = 20.11, 19 in each condition). All methods were similar to those in Experiment 2, except that whenever the word was absent from the GG, we presented a visual mask equal in length to the maximum length of the secondary task words (i.e., “#######”). Additionally, given that we found no differences in our previous studies between our two control conditions (i.e., no glass and glass with no words), we included only the no glass control condition.
Despite the use of a mask to attenuate the abrupt onset of the HUD-based information, the data derived from Experiment 5 were consistent with the patterns observed in Experiments 1–4; participants took more time to complete the primary search task when a secondary stimulus was presented on the GG.
Overall, our data show that there is a cost associated with wearable technology in dual-task contexts that approximate situations often encountered in the real world. What’s more, this effect is robust and, at the very least, difficult to mitigate. In Experiment 1, we found evidence of a dual-task cost when wearing the GG, and that this cost was not offset by relevance instructions pertaining to the secondary information; costs persisted even when participants were instructed to ignore the secondary information. In Experiment 2, we found that when participants were informed that they would be tested on the secondary information, performance costs were even more robust. In Experiment 3, we showed that RT costs associated with the HUD-based secondary information were largely orthogonal to the temporal onset of that information in relation to the primary task; secondary information was nearly always disruptive to visual search, regardless of time of onset. Experiment 4 indicated that the costs of secondary HUD-based information are not only incurred to selective attention mechanisms, but are in fact present at early processing stages thought to be associated with preattentive mechanisms. Finally, in Experiment 5, we tested the possibility that the patterns of data observed in Experiments 1–4 may have been associated with the abrupt onset of the HUD-based information and found that the pattern persisted when the abrupt onsets were eliminated.
Our results provide robust evidence that primary task performance is impaired by secondary information presented on a wearable HUD and is relatively independent of task relevance. Although there was some evidence in Experiment 2 that participants weighed secondary information portrayed as relevant to the primary task more heavily than information portrayed as irrelevant, and in turn incurred larger overall performance costs, this finding was not replicated in all experiments. Generally speaking, information pertinence may not matter when set against broader distraction, as previous researchers have found that items relevant to safety were not recognized any more often than irrelevant items in either single- or dual-task scenarios (Strayer & Drews, 2007). That these costs exist in a simplified environment is particularly worrisome when speculating about how they might generalize to more realistic multitasking situations (Horrey & Wickens, 2006). Even under relatively simple task conditions, performance decrements were substantial, at ranges of 450–600 milliseconds compared with control conditions. Given the practicality and growing practice of implementing HUDs for a wider variety of users (beyond those in aviation), these costs should give researchers and practitioners pause (Crawford & Neal, 2006; Liu et al., 2009). It is not unreasonable to speculate that these costs might be more severe under increasingly complex, realistic task conditions (e.g., when driving) (Strayer et al., 2003). In simulated environments, GG has produced impairments similar to those present when using a cellular device; however, the performance decrements are less severe (He, Choi, McCarley, Chaparro, & Wang, 2015; Sawyer, Finomore, Calvo, & Hancock, 2014).
Importantly, in our studies, performance impairments were present regardless of whether the primary task depended on preattentive parallel processes or serial attention, suggesting that costs under real-world conditions are likely to occur across a broad array of tasks and conditions. Whereas previous findings have demonstrated impairments in perceptual memory under dual-task conditions (Strayer et al., 2003), our data suggest broad-spectrum impairments to attentional processes as well. Our findings are consistent with those derived from previous theoretical models suggesting that cross-task interference is likely to be high when competing information is presented within the same perceptual modality (Wickens, 2008). However, the cognitive mechanisms underlying the interference are often left unspecified. Strayer and Drews (2007) proposed that the underlying interference accompanying technology-based distraction is likely associated with inattentional blindness; secondary information impedes the encoding of primary task information. Our finding that secondary information can impede processes associated with both inefficient and efficient search suggests that dual-task performance impairments may actually arise quite early in the information-processing chain and impact the selection of which low-level information in the environment is passed on to higher-order processes for scrutiny. This explanation is not inconsistent with the proposal of Strayer and Drews. Rather, it provides some broader perspective on where their inattentional blindness findings may emerge from: broad impairments to the deployment of attentional processes. 
Still, it is worth noting that while we found evidence for impairments to both parallel and serial attentional mechanisms through our experimental manipulations, within our studies there was no interaction of GG condition with set size in the primary visual search task, which one might expect to observe in the presence of selective attention impairments (though this might also be reflective of some sort of decision-making process as opposed to selective attention alone). This might suggest that the locus of dual-task impairments, at least as it pertains to our particular set of tasks, is more complex than can be described by attentional impairments alone. In future work, researchers should continue to explore the phenomena at the mechanistic level.
Overall, performance costs in our studies occurred regardless of perceived importance of secondary information (participants were unable to ignore secondary information even when instructed to do so) and time course of information presentation. Combined, our data strongly suggest that caution should be exercised when deploying HUD-based informational displays in circumstances where the primary user task is visual in nature. Just because we can, does not mean we should.
We thank Alexander Lemkin for his help in developing the Google Glass Java application used in this project. This research was supported in part by a National Science Foundation graduate research fellowship (to JEL).
JEL and MBN conceived of the study, designed the experiments, interpreted the data, and wrote the manuscript. JEL programmed and executed the experiments, and analyzed the data. Both authors read and approved the final manuscript.
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
- Allport, A. D. (1980). Attention and performance. Cognitive Psychology: New Directions, 1, 112–153.
- Ceurstemont, S. (2014). The virtual, in reality. New Scientist, 221(2950), 17.
- Chua, F. K. (2013). Attentional capture by onsets and offsets. Visual Cognition, 21(5), 569–598. doi:10.1080/13506285.2013.812700
- Coltheart, M. (1981). The MRC psycholinguistic database. Quarterly Journal of Experimental Psychology: Section A, Human Experimental Psychology, 33(4), 497–505. doi:10.1080/14640748108400805
- Crawford, J., & Neal, A. (2006). A review of the perceptual and cognitive issues associated with the use of head-up displays in commercial aviation. International Journal of Aviation Psychology, 16(1), 1–19. doi:10.1207/s15327108ijap1601_1
- Drews, F. A., Yazdani, H., Godfrey, C. N., Cooper, J. M., & Strayer, D. L. (2009). Text messaging during simulated driving. Human Factors, 51(5), 762–770. doi:10.1177/0018720809353319
- Folk, C. L., & Remington, R. W. (2015). Unexpected abrupt onsets can override a top-down set for color. Journal of Experimental Psychology: Human Perception and Performance, 41(4), 1153–1165. doi:10.1037/xhp0000084
- He, J., Choi, W., McCarley, J. S., Chaparro, B. S., & Wang, C. (2015). Texting while driving using Google Glass™: Promising but not distraction-free. Accident Analysis and Prevention, 81, 218–229. doi:10.1016/j.aap.2015.03.033
- Horrey, W. J., & Wickens, C. D. (2006). Examining the impact of cell phone conversations on driving using meta-analytic techniques. Human Factors, 48(1), 196–205. doi:10.1518/001872006776412135
- International Telecommunication Union (ITU). (2015). ICT facts and figures: The world in 2015. Geneva, Switzerland: ITU.
- Jones, T. C., Jacoby, L. L., & Gellis, L. A. (2001). Cross-modal feature and conjunction errors in recognition memory. Journal of Memory and Language, 44(1), 131–152. doi:10.1006/jmla.2001.2713
- Kramer, A. F., Hahn, S., Irwin, D. E., & Theeuwes, J. (1999). Attentional capture and aging: Implications for visual search performance and oculomotor control. Psychology and Aging, 14(1), 135–154. doi:10.1037/0882-7974.14.1.135
- Liu, D., Jenkins, S. A., Sanderson, P. M., Watson, M. O., Leane, T., Kruys, A., & Russell, W. J. (2009). Monitoring with head-mounted displays: Performance and safety in a full-scale simulator and part-task trainer. Anesthesia and Analgesia, 109(4), 1135–1146. doi:10.1213/ANE.0b013e3181b5a200
- Neider, M. B., McCarley, J. S., Crowell, J. A., Kaczmarski, H., & Kramer, A. F. (2010). Pedestrians, vehicles, and cell phones. Accident Analysis and Prevention, 42(2), 589–594. doi:10.1016/j.aap.2009.10.004
- Sawyer, B. D., Finomore, V. S., Calvo, A. A., & Hancock, P. A. (2014). Google Glass: A driver distraction cause or cure? Human Factors, 56(7), 1307–1321. doi:10.1177/0018720814555723
- Starner, T. E. (2002). Attention, memory, and wearable interfaces. IEEE Pervasive Computing, 1(4), 88–91. doi:10.1109/MPRV.2002.1158283
- Strayer, D. L., Cooper, J. M., Turrill, J., Coleman, J., Medeiros-Ward, N., & Biondi, F. (2013). Measuring cognitive distraction in the automobile (Technical report). Washington, DC: AAA Foundation for Traffic Safety. Retrieved from https://www.aaafoundation.org/sites/default/files/MeasuringCognitiveDistractions.pdf
- Strayer, D. L., Drews, F. A., & Johnston, W. A. (2003). Cell phone-induced failures of visual attention during simulated driving. Journal of Experimental Psychology: Applied, 9(1), 23–32. doi:10.1037/1076-898X.9.1.23
- Strayer, D. L., & Drews, F. A. (2007). Cell-phone-induced driver distraction. Current Directions in Psychological Science, 16(3), 128–131. doi:10.1111/j.1467-8721.2007.00489.x
- Theeuwes, J., Kramer, A. F., Hahn, S., Irwin, D. E., & Zelinsky, G. J. (1999). Influence of attentional capture on oculomotor control. Journal of Experimental Psychology: Human Perception and Performance, 25(6), 1595–1608. doi:10.1037/0096-1523.25.6.1595
- Treisman, A. M., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12(1), 97–136. doi:10.1016/0010-0285(80)90005-5
- Wickens, C. D. (2002). Multiple resources and performance prediction. Theoretical Issues in Ergonomics Science, 3(2), 159–177. doi:10.1080/14639220210123806
- Wickens, C. D. (2008). Multiple resources and mental workload. Human Factors, 50(3), 449–455. doi:10.1518/001872008X288394
- Wolfe, J. M. (1998). Visual search. In H. Pashler (Ed.), Attention (pp. 13–56). Hove, UK: Psychology Press.
- Wolfe, J. M. (2010). Guided Search 4.0: A guided search model that does not require memory for rejected distractors. Journal of Vision, 1(3), 349–349. doi:10.1167/1.3.349
- Wolffsohn, J. S., McBrien, N. A., Edgar, G. K., & Stout, T. (1998). The influence of cognition and age on accommodation, detection rate and response times when using a car head-up display (HUD). Ophthalmic and Physiological Optics, 18(3), 243–253. doi:10.1016/S0275-5408(97)00094-X
- Yantis, S., & Jonides, J. (1984). Abrupt visual onsets and selective attention: Evidence from visual search. Journal of Experimental Psychology: Human Perception and Performance, 10(5), 601–621. doi:10.1037/0096-1523.10.5.601