Most papers in the visual search literature begin with the description of a daily task that requires us to locate a target object amongst other distracting objects. Rather than studying these daily tasks themselves, psychologists have tended to reduce such examples to specific lab-based visual search tasks in which participants are instructed to search for a pre-specified item amongst competing distractors whilst response time (RT) and accuracy are recorded. Such tasks have the benefit of a high level of experimental control, which has produced a very rich understanding in this area (e.g., Wolfe & Horowitz, 2017). However, there is some doubt about whether many of the principles of visual search derived from lab-based studies scale up to more complicated situations (e.g., Kunar & Watson, 2011). One reason is that lab-based visual search tasks often fail to capture the full range of classes of real-world searches. Kunar and Watson (2011) conducted a series of experiments in a complex but highly controlled multi-dimensional asynchronous dynamic (MAD) world to assess how basic elements of real-world search (i.e., motion, luminance changes, high set sizes, a loosely defined target/template, and target uncertainty) affected search efficiency. Their overall conclusion was that visual search principles previously established in the literature do not apply to more complex and ‘realistically’ designed displays. This highlights the need to design lab-based tasks which retain high experimental control whilst capturing the specific components of real-world tasks that a researcher wants to understand.
Many real-world visual search tasks encompass more than just search. In some dynamic visual search tasks, we must track the changing spatial locations of target and distractor items as they move around the environment. The ability to do this has been extensively studied using the multiple object tracking (MOT) paradigm which requires participants to allocate attention to and continuously track multiple moving objects (see Meyerhoff et al., 2017, for a review). In other real-world tasks, such as CCTV monitoring, the operator must search the monitors and detect the occurrence of any suspicious activity. This task aligns with change detection experiments where people’s ability to detect specific changes (e.g., the suspicious activity) in a visual scene is assessed (see Rensink, 2002, for a review). The real-world tasks researchers seek to understand are complex and often involve components of visual search, MOT and change detection, yet these three paradigms are most commonly discussed and researched in isolation. Clearly, it is advantageous to develop novel tasks that capture and combine components of existing paradigms.
Numerous occupations require search (visual search) amongst multiple moving objects (MOT) where the goal is to detect a critical event (change detection). For example, lifeguards are required to search dynamic aquatic environments for the occurrence of dangerous events such as drowning, and CCTV operators must monitor a bank of screens to detect suspicious behaviour. In these examples the observed environment constantly changes, with a high likelihood of occlusion and changing motion patterns: factors that are commonly studied using an MOT paradigm (e.g., Flombaum et al., 2008; Luu & Howe, 2015). In such tasks, the visual environment consists of a set of items, each of which is a potential target whose status could change at any point. For example, any individual in a swimming pool could drown such that, at any point, each could require saving and become a ‘target’. Moreover, these occupations require search for a critical event and thus capture elements of both dynamic visual search and change detection. We therefore developed a novel dynamic visual search task for an orientation change that incorporates these specific components of real-world tasks. Importantly, we use the term dynamic to refer to items that are constantly changing spatial location rather than changing feature information (e.g., Van der Burg et al., 2008).
Although the effect of motion on visual search has received a lot of attention in the visual search literature, there remains little consensus on its effect. McLeod et al. (1988) showed that search for targets defined by a conjunction of the features movement and form proceeded in parallel. They therefore proposed a motion filtering account in which a search system filters by movement so that attention can be directed to stimuli with a common movement characteristic (i.e., stationary or moving items), making subsequent search for a remaining single characteristic (e.g., target form) easier. Since then, motion has been shown to aid target detection (e.g., Abrams & Christ, 2005; Franconeri & Simons, 2003), reduce search efficiency (e.g., Kunar & Watson, 2011), or have no effect (e.g., Hulleman, 2009). Such discrepant results emerge from the different paradigms used to assess the effect of motion on search. Of most relevance to our experiments, Hulleman’s (2009, 2010) work combines an MOT and search paradigm. Participants searched for T’s amongst L’s in either static or moving (i.e., MOT-based) search displays and had similar search slopes for both target-present and target-absent trials (Hulleman, 2009). In subsequent work, Hulleman (2010) again found no evidence for a difference between static and moving search displays when the task was relatively easy (Experiments 1 and 2), but evidence for a drop in performance when participants were forced to keep track of individual items (i.e., the task was made harder; Experiments 3 and 4). Pratt et al. (2010) also combined an MOT and search paradigm in which participants tracked items moving around a display and had to respond as quickly as possible when they saw an object disappear. In an ‘inanimate’ condition, items moved in a predictable manner when they collided with each other or the frame, and in an ‘animate’ condition an item moved unpredictably without having collided with another item. Response time was faster to targets that underwent animate motion, which led the authors to conclude that motion changes not due to an external event (e.g., a collision) capture attention. Taken together, this research shows that the effect of motion on search is display- and task-specific, which reinforces the need to develop lab-based search tasks that specifically model the components of the real-world task researchers attempt to simulate.
One characteristic of several real-world search tasks that has received little attention in the search literature is that the status of an item can change, rendering one item a ‘target’ and the others ‘distractors’. For example, an individual could be swimming safely one minute and then encounter difficulty shortly after, making this swimmer the target of a lifeguarding search. In low-level terms, these types of events are distinguished by changes in motion characteristics or visual appearance and are therefore relevant to the question of the extent to which feature changes in an item can be detected. Some studies have examined the ability to detect such changes within an MOT framework. Sears and Pylyshyn (2000) showed that target form changes were identified faster than non-target form changes, and Bahrami (2003) showed participants were more likely to detect colour and shape changes in targets than in distractors. Vater et al. (2016) showed that changes in target motion (a change in speed) were detected faster than changes in target form (a change in shape). In these studies, however, the target item was known to participants prior to the onset of a trial, which is not representative of many dynamic search tasks in which any item in a display could potentially become a target.
Pylyshyn et al. (2008) used a probe detection task in which participants were required to monitor for the occurrence of small dots that could appear anywhere on the screen. Participants completed a standard tracking condition, in which they had to both track the targets and detect the presence of a probe, and a control condition, in which they were not required to track targets. In both conditions, participants detected more probes on static non-target items than on moving non-target items, suggesting that the motion of non-target items impaired detection of the probe. To better understand the extent to which motion impairs probe detection, it is beneficial to collect RT, as is typical in the visual search literature but less common within an MOT framework. In other related work, Tripathy and Barrett (2004) developed a task which assessed participants’ ability to detect a deviation from the linear trajectory of moving items. In their Experiments 3 and 4, all items were potential targets (i.e., any could deviate from a linear trajectory), thus requiring participants to monitor the trajectories of all items simultaneously. They showed that when one item changed trajectory (i.e., became the target), the detection threshold for identifying this change rose steeply with the number of items in a display. However, few other studies have investigated the situation in which there are numerous potential targets, all of which must therefore be monitored, and target identity only becomes apparent later. More research is required to better understand how people track objects whilst searching for a target that is signalled by a change in status, and other types of changes, such as feature changes, also require consideration.
Here, we sought to investigate the effect of motion on the detection of a visual change within a dynamic visual search framework. In two experiments, we introduce a novel dynamic visual search task for a change event. Experiment 1 explored the effect of set size and object motion (stationary or moving) on change detection time and Experiment 2 explored whether there was an additional cost associated with detecting a feature change that occurred on a moving target compared with a static target.
Experiment 1
Experiment 1 examined the effect of set size and object motion on the time to detect an orientation change in a Gabor patch. This study was pre-registered on the Open Science Framework (OSF, https://osf.io/6gs72/).
Participants
Thirty undergraduate students from the University of Bristol (19 female; mean age = 19.87 years, SD = 2.01) took part in return for course credit. Participants in both experiments had self-reported normal or corrected-to-normal vision.
Design
A repeated measures design was used, with set size (one to eight targets) and object motion (static or moving) as the independent variables and time to detect an orientation change as the dependent variable.
Procedure
Stimuli were presented on a 21″ LCD monitor with a resolution of 1920 × 1080 pixels and a refresh rate of 60 Hz, with participants seated approximately 40 cm from the screen. Participants were tested in groups in a large computing laboratory (which precluded completely standardising luminance and viewing distance, so we report RGB and pixel values). Stimuli consisted of Gabor patches (striped sinusoidal gratings within a Gaussian envelope, with a mean RGB value of 128, 128, 128, matching the background colour, and maximum and minimum RGB values of 255, 255, 255 and 0, 0, 0, representing 100% contrast). The visible diameter of each Gabor was 64 pixels. The background remained a uniform grey (RGB 128, 128, 128) throughout the experiment. At the beginning of each trial, a white fixation cross (“+”) was displayed in the centre of the screen. A number of targets (between one and eight) were then displayed on screen in random locations (at least 70 pixels away from the screen edge and from other targets). At the start of the trial, all items were oriented vertically. In the stationary condition, the targets remained in their original locations for the entirety of the trial. In the motion condition, the targets began moving after 500 ms, each following a randomly selected trajectory at a constant, randomly chosen speed between 85 and 254 pixels per second. If targets collided with the screen edge, they rebounded; if targets collided with one another, they rebounded off each other (i.e., ballistic motion). After a random duration between 2000 and 4000 ms had elapsed, one randomly selected target changed orientation by rotating 30° anti-clockwise (see Fig. 1, top right corner). One item underwent an orientation change on every trial, such that there were no target-absent trials. Participants were instructed to press the left button of a standard USB mouse as soon as they detected a change. After a response was recorded, a blank screen was displayed for 1000 ms before the next trial commenced. There were two blocks of 240 experimental trials (i.e., 30 trials per condition), with object motion and set size randomly intermixed across blocks. There were five practice trials.
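For concreteness, the trial logic described above can be sketched in code. The following is a minimal, illustrative sketch assuming PsychoPy: the timings, sizes, speeds and margins are taken from the text, whereas the overall structure, the spatial frequency value, and all function and variable names are our own assumptions rather than the actual experimental code (the 70-pixel inter-item spacing at placement and item-item rebounds are omitted for brevity).

import math
import random
from psychopy import visual, core, event

WIN_W, WIN_H = 1920, 1080
MARGIN = 70  # minimum distance of items from the screen edge (pixels)

win = visual.Window(size=(WIN_W, WIN_H), units='pix', fullscr=True,
                    color=(0, 0, 0))  # (0, 0, 0) in PsychoPy's rgb space = mid-grey (RGB 128)
mouse = event.Mouse(win=win)

def run_trial(set_size, moving):
    # Place vertically oriented Gabors at random locations
    # (the 70-pixel spacing check between items is omitted here).
    patches, velocities = [], []
    for _ in range(set_size):
        pos = (random.uniform(-WIN_W / 2 + MARGIN, WIN_W / 2 - MARGIN),
               random.uniform(-WIN_H / 2 + MARGIN, WIN_H / 2 - MARGIN))
        patches.append(visual.GratingStim(win, tex='sin', mask='gauss',
                                          size=64, sf=0.05, ori=0, pos=pos))
        speed = random.uniform(85, 254)            # pixels per second
        theta = random.uniform(0, 2 * math.pi)     # random trajectory direction
        velocities.append([speed * math.cos(theta), speed * math.sin(theta)])

    target = random.choice(patches)                # the item that will change
    change_at = random.uniform(2.0, 4.0)           # change onset: 2000-4000 ms
    clock, last_t = core.Clock(), 0.0
    changed, rt_clock = False, None

    while True:
        t = clock.getTime()
        dt, last_t = t - last_t, t
        for p, v in zip(patches, velocities):
            if moving and t > 0.5:                 # motion begins after 500 ms
                x, y = p.pos[0] + v[0] * dt, p.pos[1] + v[1] * dt
                if abs(x) > WIN_W / 2 - MARGIN:    # rebound off screen edges
                    v[0] = -v[0]
                if abs(y) > WIN_H / 2 - MARGIN:
                    v[1] = -v[1]
                p.pos = (x, y)                     # (item-item rebounds omitted)
            p.draw()
        if not changed and t >= change_at:
            target.ori = -30                       # rotate 30 degrees anti-clockwise
            changed, rt_clock = True, core.Clock()
        win.flip()
        if changed and mouse.getPressed()[0]:      # left mouse button press
            return rt_clock.getTime()              # change-detection time (s)

Under this sketch, detection time is measured from the frame on which the orientation change is drawn until the first left-button press; the item-item rebounds and blocked trial structure described above could be added to this loop in the same fashion.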