Operators in high-stress domains often need to divide attention between the central and peripheral visual fields. A pilot, for example, must monitor for cockpit alerts while maintaining awareness of an aircraft’s position in space (Wickens, Sebok, McCormick, & Walters, 2016), and operators in air traffic control must remain responsive to critical alerts while managing the flow of air traffic (Imbert et al., 2014). Similarly, the increasing use of head-worn displays in professional roles means that many operators are required to switch attention between tasks within their central visual field and peripheral events projected onto the headset (Pascale et al., 2015). Within each of these domains, performing effectively means processing information presented centrally, while also discriminating between critical and non-critical “noise” events in the visual periphery. For system designers, this issue implies a need to understand the task and display characteristics that maximize peripheral detection and discrimination under conditions of high central load.
An obvious technique to improve target detection is to increase target salience, the feature contrast between the target and its surroundings (Itti & Koch, 2000; Theeuwes, 2010). Unfortunately, visual heterogeneity reduces feature contrast (Humphreys, Quinlan, & Riddoch, 1989; Nothdurft, 1992), and in a cluttered, dynamic environment like the cockpit, even events designed to be highly salient can go undetected (Nikolic, Orr, & Sarter, 2004; Steelman, McCarley, & Wickens, 2013). Alternative strategies for ensuring rapid target detection are, therefore, useful. One such strategy is to present targets redundantly, that is, on multiple channels simultaneously. Redundant presentation generally speeds target detection (Miller, 1982; Todd, 1912), and is endorsed in human factors engineering as a method of promoting information security (Wickens & Hollands, 2000; Wickens, Prinet, Hutchins, Sarter, & Sebok, 2011). For example, vehicle collision warning systems often employ redundant visual and auditory signals to alert a driver to a potential collision (Ho, Reed, & Spence, 2007). Similarly, in aircraft settings, pilots respond faster to missile approach warnings as the number of informational channels delivering the warning increases (Selcon, Taylor, & McKenna, 1995).
Like a manipulation of salience, however, redundant information display is not guaranteed to aid performance. Constraints on processing resources can modulate the efficiency with which concurrent events are processed (Townsend & Eidels, 2011), limiting the benefits produced by a redundant target (e.g., Eidels, Townsend, Hughes, & Perry, 2014; McCarley, Mounts, & Kramer, 2007; Townsend & Nozawa, 1995). Moreover, under some conditions, the addition of a second target may produce no redundancy gain at all (Grice, Canham, & Gwynne, 1984). More surprisingly, within a multi-task environment, redundant signals may actually be disruptive: Wickens and colleagues (Seagull, Wickens, & Loeb, 2001; Wickens & Gosney, 2003) have reported evidence that redundant audio-visual target presentation in a monitoring task can disrupt performance in an ongoing tracking task. These results suggest that the demands of encoding or recognizing redundant targets can divert processing resources from a concurrent task, producing interference. In the current experiments, we pursue this effect by examining the converse possibility: that the demands of a concurrent central task might limit the efficiency of redundant-signal processing.
Measuring the efficiency of redundant-target processing
In a standard redundant-target task, participants make a speeded response to a target presented in either of two channels (e.g., on a visual channel and an auditory channel). On single-target trials, a target appears in only one channel (e.g., only the visual channel); on redundant-target trials, the target is presented in both channels (e.g., on both the visual and auditory channels). The observer responds as soon as a target is detected in either channel, a condition known as a first-terminating stopping rule (Colonius & Vorberg, 1994). Under these conditions, redundant signals generally produce faster responses than single targets, a phenomenon known as a redundant signals effect (RSE) or redundancy gain (Miller, 1982). For example, for a driver approaching a railway crossing, the presentation of both a red flashing light and a loud bell is likely to allow faster detection, and consequently a faster braking response, than either warning presented alone.
The RSE, however, may differ in magnitude under different task constraints, and in some cases may be entirely absent. The size of the RSE reflects variations in a cognitive system’s architecture and workload capacity (Townsend & Eidels, 2011; Townsend & Nozawa, 1995), where architecture refers to the arrangement of channels (e.g., serial or parallel), and workload capacity refers to the efficiency with which the channels operate concurrently. The RSE can also reflect variations in inter-channel dependencies (Townsend & Wenger, 2004). The simplest model of the RSE is the unlimited-capacity, independent parallel (UCIP) model, wherein multiple channels operate with stochastic independence and each channel’s rate of processing remains unchanged regardless of the total number of channels in operation (Townsend & Eidels, 2011). Under a first-terminating stopping rule, the UCIP model produces a redundancy gain simply because the processing time of the system as a whole is determined by the output of the fastest channel on each trial. This mechanism is known as statistical facilitation (Raab, 1962). Super-capacity occurs when an increase in the number of operating channels (i.e., workload) produces a corresponding increase in the individual channels’ processing rates, yielding a larger RSE than the UCIP model predicts. Conversely, capacity is limited when an increase in workload decreases the processing rates of the individual channels, yielding a smaller RSE than the UCIP model predicts. When capacity is highly limited, the redundancy gain may be no different from that of a serial model.
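To make statistical facilitation concrete, the following minimal simulation races two stochastically independent channels under a first-terminating stopping rule. It is a sketch under assumed parameters: exponential finishing times with 400-ms means, chosen purely for illustration rather than taken from the present experiments.

```python
# Minimal simulation of statistical facilitation under the UCIP model.
# Channel finishing times are hypothetical exponentials (mean 400 ms),
# chosen for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

t_a = rng.exponential(400.0, n)    # channel A finishing times (ms)
t_b = rng.exponential(400.0, n)    # channel B finishing times (ms)
t_red = np.minimum(t_a, t_b)       # first-terminating rule: fastest channel wins

print(f"mean single-target time:    {t_a.mean():.0f} ms")
print(f"mean redundant-target time: {t_red.mean():.0f} ms")
# The redundant condition is faster even though neither channel's rate
# changed: the gain is purely statistical.
```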
Importantly, unless capacity is extremely limited, mean response times (RTs) alone cannot distinguish gradations in parallel processing capacity within a redundant-target task. To establish whether a system is limited, unlimited, or super-capacity, we therefore need to analyze the data at the level of the RT distributions. As a means of distinguishing statistical facilitation in the UCIP model from actual processing speed-ups with multiple channels, Miller (1982) established an upper bound on performance for the UCIP model, known as the race-model inequality. The inequality holds that in the UCIP model, the cumulative distribution function (CDF) of the redundant-target trials cannot exceed the summed CDFs for the two categories of single-target trials. Evidence that the CDF for the redundant-target trials exceeds the summed CDFs for the single-target trials at any time t thus disconfirms the UCIP model and implicates a super-capacity model instead. Analogously, Grice et al. (1984) identified a lower bound on UCIP performance, providing a test of extreme capacity limitations. The Miller and Grice inequalities, however, are both conservative tests, insensitive to modest variations in capacity. Townsend and Nozawa’s (1995) workload capacity coefficient, C(t), provides a more fine-grained measure of efficiency, sensitive to variations in capacity between the Miller and Grice bounds.
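A violation of the Miller bound can be checked directly against empirical CDFs. The sketch below is illustrative only; the variable names (rt_a, rt_b, rt_red, assumed to hold correct RTs from the two single-target conditions and the redundant-target condition) and the quantile grid are assumptions, not details of the present experiments.

```python
# Sketch of a race-model-inequality check (Miller, 1982). Inputs are
# assumed to be numpy arrays of correct RTs.
import numpy as np

def race_model_violations(rt_a, rt_b, rt_red, grid=None):
    """Return time points at which F_AB(t) > F_A(t) + F_B(t)."""
    if grid is None:
        # evaluate at interior quantiles of the redundant-target RTs
        grid = np.quantile(rt_red, np.linspace(0.05, 0.95, 19))
    f_a = np.array([(rt_a <= t).mean() for t in grid])
    f_b = np.array([(rt_b <= t).mean() for t in grid])
    f_red = np.array([(rt_red <= t).mean() for t in grid])
    return grid[f_red > f_a + f_b]  # any violation disconfirms the UCIP model
```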
C(t) rests on the conceptualization of the hazard function for speeded responses as a gauge of moment-to-moment cognitive expenditure. In a speeded task, the hazard function, h(t), indicates the instantaneous probability with which a response will occur at time t, given that a response has not yet occurred (Townsend & Ashby, 1983). The integrated hazard function, H(t), is the integral of the hazard function up to time t. Importantly, within the UCIP model, the integrated hazard functions for multiple operating channels are additive. In other words, if processing follows the UCIP model, the value of the integrated hazard function in the redundant-target condition at time t is equal to the sum of the values of the integrated hazard functions of the two single-target conditions at time t. Taking advantage of this constraint, Townsend and Nozawa (1995) define the capacity coefficient, C(t), as

$$ C(t)=\frac{H_{AB}(t)}{H_A(t)+H_B(t)},\quad t>0, \tag{1} $$
where $H_{AB}(t)$ refers to the integrated hazard function of the redundant-target condition, and where $H_A(t)$ and $H_B(t)$ refer to the individual integrated hazard functions for a target present only on channel A or channel B, respectively. Under the UCIP model, in which the integrated hazard functions for channels A and B are additive, C(t) = 1.0. Values of C(t) greater than 1.0 indicate that $H_{AB}(t) > H_A(t) + H_B(t)$, implying super-capacity. Conversely, values less than 1.0 indicate that $H_{AB}(t) < H_A(t) + H_B(t)$, implying limited capacity. In extreme cases capacity may be fixed, C(t) = 0.5, implying a zero-sum tradeoff between channels and producing performance akin to that predicted by a serial model.
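To make Eq. 1 concrete, a minimal sketch follows, estimating each integrated hazard function from the empirical survivor function via the identity H(t) = −ln S(t) for continuous RT distributions. The variable names and quantile grid are illustrative assumptions; Houpt and Townsend (2012) describe more refined estimation and inference procedures.

```python
# Sketch of the capacity coefficient of Eq. 1. Integrated hazards are
# estimated as H(t) = -ln S(t) from the empirical survivor function.
import numpy as np

def integrated_hazard(rts, grid):
    surv = np.array([(rts > t).mean() for t in grid])  # empirical S(t)
    return -np.log(np.clip(surv, 1e-12, None))         # clip guards log(0)

def capacity_coefficient(rt_a, rt_b, rt_red, grid):
    denom = integrated_hazard(rt_a, grid) + integrated_hazard(rt_b, grid)
    return integrated_hazard(rt_red, grid) / denom     # C(t) = 1 under UCIP

# Assumed usage: evaluate over interior quantiles to avoid unstable tails.
# grid = np.quantile(np.concatenate([rt_a, rt_b, rt_red]),
#                    np.linspace(0.1, 0.9, 17))
# c_t = capacity_coefficient(rt_a, rt_b, rt_red, grid)
```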
A transformation of C(t) that can be used to compare performance across experiments is the standardized capacity score, Cz (Houpt & Townsend, 2012). Cz provides a summary capacity measure collapsed over time and suitable for comparison between experimental conditions. Under the UCIP model, values follow a standard normal distribution; a score of 0 indicates UCIP-level processing, positive scores indicate super-capacity, and negative scores indicate limited capacity.
The capacity coefficient was developed for examining judgments of displays wherein, on single-target trials, the position of the potential second target is empty. Recent developments have extended the approach to accommodate analysis of displays in which the single-target conditions include a distractor in place of the empty space (Little, Eidels, Fific, & Wang, 2015). The measure of processing efficiency in this case has been termed resilience, R(t) (Little et al., 2015). R(t) is calculated with the formula used to calculate C(t), except that the integrated hazard functions in the denominator of the equation represent single-target conditions in which a distractor is present,
$$ R(t)=\frac{H_{AB}(t)}{H_{AX}(t)+H_{XB}(t)},\quad t>0, \tag{2} $$
where $H_{AX}(t)$ is the integrated hazard function for single target A accompanied by a distractor, X, and $H_{XB}(t)$ is the integrated hazard function for single target B accompanied by X. R(t) can, in turn, be converted to a measure of normalized resilience (Houpt & Little, 2017), referred to here as Rz, analogous to Cz. Resilience differs from capacity because, when a distractor is present on single-target trials, it can divert processing resources from the target, slowing target detection (Allen, Madden, Groth, & Crozier, 1992; Ben-David, Eidels, & Donkin, 2014). Resilience, therefore, reflects both the changes in target processing rate that occur as the number of targets increases, and the potential release from interference that occurs when a distractor is replaced by a target.
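Computationally, R(t) requires no new machinery: under the assumed variable names of the earlier sketch, the same routine applies, with the distractor-present single-target conditions supplying the denominator.

```python
# R(t) of Eq. 2, reusing the capacity_coefficient sketch above. rt_ax and
# rt_xb are assumed arrays of RTs from the distractor-present conditions.
def resilience_coefficient(rt_ax, rt_xb, rt_red, grid):
    return capacity_coefficient(rt_ax, rt_xb, rt_red, grid)
```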
Interpretation of resilience scores is more involved than interpretation of the workload capacity scores. By definition, channels in the UCIP system operate at the same rate regardless of processing load. Thus, the UCIP model predicts a benchmark value of R(t) = 1 (Rz = 0), just as it predicts a benchmark value of C(t) = 1 (Cz = 0). More generally, a parallel self-terminating model predicts that R(t) will not vary as a function of distractor discriminability, and that redundant-target processing will be equally efficient in experimental designs with and without distractors; that is, C(t) and R(t) will be equal (Little et al., 2015).
In contrast, a serial self-terminating (SST) model predicts that R(t) will vary with the relative discriminability of the target and distractor. For simplicity, assume a case in which the integrated hazard functions for targets A and B are identical, both with distractors ($H_{AX}(t) = H_{XB}(t)$) and without ($H_A(t) = H_B(t)$). On redundant-target trials, the first item processed will always be a target. The integrated hazard function for redundant-target trials will, therefore, equal the integrated hazard function for single-target trials without distractors, i.e., $H_{AB}(t) = H_A(t)$. This reduces Eq. 2 to,
$$ R(t)=\frac{H_A(t)}{2\times H_{AX}(t)},\quad t>0. \tag{3} $$
On single-target trials, assuming the target position is unpredictable, the number of items that are processed will vary randomly from trial to trial; on some trials only the target will be processed, and on the remaining trials, the distractor will be processed before the target. The difference between $H_{AX}(t)$ and $H_A(t)$ will thus reflect the time needed to process the distractor on those trials on which the target is not processed first. When the time needed to process the distractor is negligible relative to the time needed to process the target, $H_{AX}(t)$ will equal $H_A(t)$, and Eq. 3 reduces to the fixed-capacity value, R(t) = 0.5. When the time needed to process the distractor becomes more substantial, $H_{AX}(t)$ decreases and R(t) becomes larger. In other words, the SST model predicts that resilience will be limited when distractor interference is negligible and will increase as distractor interference becomes larger.
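This qualitative prediction can be illustrated by simulating an SST system directly. In the sketch below, all processing-time distributions are hypothetical exponentials chosen for convenience; as the mean distractor processing time grows from negligible to substantial, the estimated R(t) of Eq. 3 rises from near the fixed-capacity value of 0.5.

```python
# Simulation of a serial self-terminating (SST) model. All distributions
# are hypothetical exponentials (means in ms) chosen only to illustrate Eq. 3.
import numpy as np

rng = np.random.default_rng(7)
n, target_mean = 200_000, 300.0

def sst_single_target(distractor_mean):
    """Single-target trial: with probability .5 the distractor is processed
    first (target position unpredictable), adding its processing time."""
    distractor_first = rng.random(n) < 0.5
    extra = rng.exponential(distractor_mean, n) * distractor_first
    return extra + rng.exponential(target_mean, n)

# Redundant-target trial: the first item processed is always a target
rt_red = rng.exponential(target_mean, n)

grid = np.quantile(rt_red, np.linspace(0.2, 0.8, 7))
H = lambda rts: -np.log(np.array([(rts > t).mean() for t in grid]))

for d_mean in (1.0, 100.0, 300.0):  # negligible -> substantial distractor cost
    r_t = H(rt_red) / (2 * H(sst_single_target(d_mean)))  # Eq. 3
    print(f"distractor mean {d_mean:5.0f} ms: mean R(t) = {r_t.mean():.2f}")
# R(t) sits near the fixed-capacity value of 0.5 when the distractor cost is
# negligible and grows as the distractor cost increases.
```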
Regardless of the underlying architecture, however, values of R(t) < 1 or Rz < 0 imply that redundant targets are processed more slowly than the UCIP model predicts, and values of R(t) > 1 or Rz > 0 imply that redundant targets are processed more quickly than the UCIP model predicts (Houpt & Little, 2017). By analogy to the terminology applied to workload capacity, we will describe these effects as limited capacity and super-capacity, respectively. However, it is important to note that these labels describe the performance of the multi-channel system as a whole and do not necessarily connote changes in the processing rates of the individual channels. As described above, for example, changes in distractor discriminability within an SST system may change R(t) from less than 1 to greater than 1 even if the target processing rate remains constant.
Redundant presentation of peripheral signals will thus aid detection only if the signals are processed without severe limitations of capacity or resilience. Unfortunately, existing data do not make it clear that this will be the case. Empirical data suggest that attention is weighted toward the central visual field (Carrasco, Evert, Chang, & Katz, 1995; Carrasco & Yeshurun, 1998; Wolfe, 1998), and modeling likewise suggests that elemental processing resources are denser in the central retina than at greater eccentricities (Miller & Ulrich, 2003). A demanding task in the central visual field might further shift attention away from the retinal periphery (Leibowitz & Appelle, 1969; Reimer, 2010), engendering visual tunneling (Williams, 1985). For example, observers show higher detection thresholds for luminance probes in the visual periphery when performing a concurrent central task, with more difficult central tasks producing larger threshold increases (Leibowitz & Appelle, 1969). Similarly, accuracy on a peripheral discrimination task is higher when a concurrent central task is low in perceptual load than when it is high (Williams, 1985). Even task-irrelevant stimuli presented at fixation can interfere with processing of peripheral visual targets (Beck & Lavie, 2005; Schwartz et al., 2005). Within a peripheral redundant-target paradigm with a simultaneous central-load task, such effects might limit the processing resilience of peripheral targets, reducing the magnitude of the RSE. In addition, a prominent account of dual-task performance, multiple resource theory, argues that competition between tasks drawing on similar processing resources will degrade performance (Wickens, 1981, 2002). By this account, within a dual tracking/target-detection paradigm, the central tracking task may consume visual processing resources, limiting the attentional resources available for processing peripheral items. If so, we would expect poorer processing efficiency when the detection task is accompanied by the central tracking task.
To test these possibilities, the current experiments assessed human performance in a dual-task paradigm pairing a central manual tracking task with a peripheral redundant-target task. We examined whether detection of visual targets within the dual-task paradigm produces a redundancy gain and, if so, how the efficiency of that processing compares with the UCIP model. In Experiments 1 and 2, we used a target detection task to assess processing resilience under both single- and dual-task load. Finally, in Experiment 3, we designed stimuli that precluded parallel target processing, allowing us to examine resilience within a serial model.