Our lives are increasingly intertwined with technology (e.g., smartphones, wearable heads-up displays [HUDs]). While informationally rich environments can yield productivity benefits in some contexts and convenience in others, designers, scientists, and users need to understand how technological integration might also be harmful. We investigate this latter possibility in the present research, which offers a blend of theoretically relevant and practically applicable data that should interest a wide audience, including psychologists, engineers, designers, policy makers, and the general public.
Mobile technology has become essential and pervasive in the everyday lives of many people. Understanding the extent to which increasingly integrated information systems, such as cell phones (Drews, Yazdani, Godfrey, Cooper, & Strayer, 2009; Strayer, Drews, & Johnston, 2003) and other user interfaces, impact human performance on a range of common tasks and cognitive processes is of critical importance. Specifically, how does the adoption of various technologies remove a user from the present moment or task at hand, and at what cost (Starner, 2002)? Mobile technologies, for instance, have progressed from cell phones to wearable interfaces, leaving users in constant contact with their devices, regardless of whether they explicitly choose to engage with them.
It has been well established that multitasking induces performance costs (Allport, 1980; Horrey & Wickens, 2006; Neider, McCarley, Crowell, Kaczmarski, & Kramer, 2010; Strayer et al., 2003). In the practical domain, much of this research has focused on cell phone engagement while driving or walking (Horrey & Wickens, 2006; Kramer, Hahn, Irwin, & Theeuwes, 1999; Neider et al., 2010). For example, using a cell phone or text-to-speech interface while driving significantly increases cognitive load and crash risk (Drews et al., 2009; Strayer et al., 2013) and impairs memory for visual information (Strayer et al., 2003). While a focus on cell phone-related distraction has made practical sense, given the approximately 7.1 billion mobile subscriptions worldwide (International Telecommunication Union [ITU], 2015), emergent technologies are moving toward a user-integrated approach favoring HUDs. HUDs have long been used in aviation cockpits and are now being deployed in everyday environments, such as automobiles (e.g., Cadillac and Mercedes vehicles), or integrated directly with the user, as with Google Glass (GG) and Oculus Rift (Ceurstemont, 2014). Unlike cell phones, HUDs typically present users with a persistent stream of visual information (though systems such as GG can provide auditory information as well), increasing the likelihood of interference with other concurrent visual tasks (Wickens, 2002, 2008). Although prior work on multitasking has been remarkably consistent in demonstrating performance impairments across a variety of contexts, novel reappropriations of existing technologies can carry an implicit expectation that they will immunize users against such impairments. HUDs, which make use of transparent displays, have been used with great success in aviation; however, the information-processing needs and priorities of a pilot at 30,000 feet are likely to be very different from those of a driver on the ground who might have only seconds to respond to a potential hazard. Consequently, as HUDs move into less specialized contexts, it becomes imperative to understand how they interact with attentional limitations to shape behavior. To date, the literature relating HUD-based technology to attention and performance costs in everyday contexts has been minimal (Starner, 2002; Wolffsohn, McBrien, Edgar, & Stout, 1998).
Our goals in the present experiments were twofold. First, we wanted to characterize the extent to which visual information presented on a user-worn HUD (e.g., GG) impacts performance on a primary visual task, and how such effects might be modulated by the relevance of the HUD-based information and its temporal presentation (i.e., onset prior to, concurrent with, or following the onset of the primary task). Second, we wanted to shed light on the attentional mechanisms underlying performance costs that arise when information is presented on a HUD during a concurrent primary task (analogous to conversing on a cell phone while driving). To do so, we employed a visual search paradigm as our primary task, allowing us to isolate impairments to both parallel and serial attention mechanisms. Whereas efficient search for singleton targets is thought to involve parallel, preattentive processes (and less so selective attention), inefficient searches are thought to require serial attention processes that rely heavily on selective attention (Wolfe, 1998). Critically, if performance impairments occurred only during inefficient search, it would suggest that secondary task information presented on the GG is largely detrimental to selective attention processes, perhaps those related to efficiently guiding attention toward the target. Alternatively, if secondary information presented on the GG induces performance costs during singleton search, it would suggest impairment to preattentive processes as well (though it would not rule out some impairment to selective attention mechanisms), and more generally to broader visual processing. An additional benefit of using a search task is that search is a vital operation in everyday life: humans must constantly locate task-relevant information (such as a pedestrian about to step into a roadway) in the environment. Thus, visual search is both a theoretically useful and practically relevant paradigm for assessing HUD-based dual-task effects.
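To make the efficiency distinction concrete, search efficiency is conventionally quantified as the slope of the function relating RT to the number of display items: near-flat slopes indicate parallel, preattentive "pop-out" search, whereas steeper slopes indicate serial, attention-demanding search (Wolfe, 1998). The sketch below is a minimal illustration of this computation only; the RT values are hypothetical and are not data from our experiments.

```python
import numpy as np

def search_slope(set_sizes, mean_rts):
    """Fit RT = slope * set_size + intercept; the slope (ms/item) indexes search efficiency."""
    slope, intercept = np.polyfit(set_sizes, mean_rts, deg=1)
    return slope, intercept

# Hypothetical mean RTs (ms) at set sizes of 4, 8, and 12 items.
set_sizes = np.array([4, 8, 12])
singleton_rts = np.array([480, 492, 501])     # near-flat: parallel, preattentive search
inefficient_rts = np.array([620, 810, 1005])  # steep: serial, attention-demanding search

for label, rts in [("singleton", singleton_rts), ("inefficient", inefficient_rts)]:
    slope, intercept = search_slope(set_sizes, rts)
    print(f"{label}: {slope:.1f} ms/item (intercept {intercept:.0f} ms)")
```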
In all experiments, the participant’s primary task was to locate a T target among L distractors displayed on a computer screen. In some conditions, secondary information, in the form of a single word, was concurrently presented on a GG worn during a portion of the experiment. In Experiment 1, we characterized the primary task performance costs associated with presenting secondary information on the GG while also manipulating the perceived relevance of that information (through instructions). We predicted response time (RT) costs to the visual search task in the presence of a secondary information stream, as well as an added cost when participants were told the information was useful. The extent to which secondary task information was processed was assessed through a surprise recognition memory task administered after all search trials were completed. In Experiment 2, we manipulated the context of the secondary information presented on the GG by informing participants of the recognition memory task in advance. We expected secondary information to be more disruptive to the primary task when participants knew they would be tested on it. In Experiment 3, we explored the degree to which the time course of the onset of secondary information (prior to, concurrent with, or following the primary task) impacted primary task performance, and the extent to which this might interact with the perceived relevance of that information. We expected concurrent presentation to produce larger costs to primary task performance, with this cost increasing when the secondary task was perceived as more relevant. In Experiment 4, we manipulated the salience of the target T to elicit singleton search, allowing us to evaluate whether performance costs are exclusive to selective attention mechanisms or extend to preattentive processes as well. In the final experiment, we masked the onset and offset of the secondary task information to guard against the possibility that our effects reflected reflexive reorienting of attention toward an abrupt stimulus onset rather than information-processing impairments associated with managing dual-task demands.
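For concreteness, the sketch below illustrates the general trial structure in schematic Python: one rotated T target among rotated L distractors on the primary display, paired with a single secondary word whose onset can lead, coincide with, or lag the search array (as in Experiment 3). The display geometry, stimulus-onset asynchronies, and word list shown here are placeholders for illustration, not the parameters used in our experiments.

```python
import random

SOA_MS = {"prior": -400, "concurrent": 0, "following": 400}  # placeholder onset asynchronies

def make_search_display(n_items=12, grid=(4, 4)):
    """Place one rotated T (target) and rotated Ls (distractors) in random grid cells."""
    cells = [(row, col) for row in range(grid[0]) for col in range(grid[1])]
    positions = random.sample(cells, n_items)
    rotations = [random.choice([0, 90, 180, 270]) for _ in positions]
    letters = ["T"] + ["L"] * (n_items - 1)
    random.shuffle(letters)
    return list(zip(letters, positions, rotations))

def make_trial(condition, words=("coffee", "river", "candle")):
    """Pair a search display with a secondary word and its onset relative to the array."""
    return {
        "display": make_search_display(),
        "hud_word": random.choice(words),
        "hud_onset_ms": SOA_MS[condition],  # negative = word precedes the search array
    }

trial = make_trial("concurrent")
print(trial["hud_word"], trial["hud_onset_ms"])
```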