
Design of embodied interfaces for engaging spatial cognition

Abstract

Aspects of spatial cognition, specifically spatial skills, are strongly correlated with interest and success in STEM courses and STEM-related professions. Because growth in STEM-related industries is expected to continue for the foreseeable future, it is important to develop evidence-based and theoretically grounded methods and interventions that can help train relevant spatial skills. In this article, we discuss research showing that aspects of spatial cognition are embodied and how these findings and theoretical developments can be used to influence the design of tangible and embodied interfaces (TEIs). TEIs seek to bring interaction with digital content off the screen and into the physical environment. By incorporating physical movement and tangible feedback in digital systems, TEIs can leverage the relationship between the body and spatial cognition to engage, support, or improve spatial skills. We use this knowledge to define a design space for TEIs that engage spatial cognition and illustrate how TEIs that are designed and evaluated from a spatial cognition perspective can expand the design space in ways that contribute to the fields of cognitive science and human computer interaction.

Significance

The research and conceptualizations presented here integrate leading-edge developments in the fields of cognitive science and human computer interaction (HCI) and provide a new framework for future work. Through our work, we have developed a critical analysis of previously developed systems that highlights the relationships between body and spatial cognition that tangible and embodied interactive systems support. We have also developed our own content for tangible systems, illustrating how designing from a spatial cognition perspective can lead to new design and research opportunities. Our research and design process both informs the design of interactions with computational systems and leads to new knowledge about the relationships between the body, the digital medium, and cognition (in particular, spatial cognition).

The design space presented in this article illustrates the link between the way people interact with tangible and embodied interfaces (TEIs), the content the systems present, and the aspects of spatial cognition those systems engage. The design space classifies aspects of spatial cognition based on how they relate to the body and shows how a scale-based description of interactions is useful for linking interactions with spatial cognition. These classifications are useful for both cognitive science and human computer interaction researchers. Importantly, they lay out a matrix for future designs dealing with spatial cognition and also provide an analytical framework for the discussion of existing designs. Our own projects illustrate how designing digital systems from a spatial cognition perspective may lead to the creation of a strong sense of embodiment and, as a result, new ways to engage spatial cognition by leveraging the unique capabilities of digital media.

Introduction

From Super and Bachrach’s (1957) report to the meta-analysis by Wai, Lubinski, and Benbow (2009) of several longitudinal studies of student aptitude, success, and careers, there is nearly 60 years of research showing a strong relationship between success in STEM fields and spatial abilities (Lubinski & Benbow, 2006; Shea, Lubinski, & Benbow, 2001; Super & Bachrach, 1957; Wai et al., 2009; Webb, Lubinski, & Benbow, 2007). Spatial abilities, as they are discussed in those studies and defined by Lohman (1979), are “the ability to generate, retain, retrieve and transform well-structured visual images.” For example, mental rotation ability refers to the ability to construct a mental representation of an object and then draw some conclusions about that object after it has undergone some transformation. Mental rotation is just one of many spatial abilities that have been linked to success and pursuit of careers in STEM fields (Newcombe & Shipley, 2015).

This relationship between STEM and spatial abilities has led to calls from governments for research about how to support these skills in the education system (President’s Council of Advisors on Science and Technology, 2012; ServiceOntario Publications, 2014). Researchers have answered that call by generating new knowledge and creating research networks and centers like the Spatial Intelligence and Learning Center (SILC). Some researchers like Quarles and Wu have focused on creating tools that support the use of spatial skills in learning complex systems (e.g. Quarles, Lampotang, Fischler, Fishwick, & Lok, 2008; Wu et al., 2011). Other researchers have looked at which specific spatial skills are used in science disciplines, which could lead to new strategies about how best to support them (Resnick & Shipley, 2013). However, little research has been done to determine the best methods and approaches for improving spatial skills in general and, in particular, in students with low spatial ability scores, even though there is research showing that it may be possible (Uttal et al., 2013).

Recent research indicates that there is a link between the body, actions, and spatial cognition (Avraamides, Loomis, Klatzky, & Golledge, 2004; Boroditsky, 2000; Chandrasekharan, Athreya, & Srinivasan, 2006; Golledge, 1999; Lozano, Hard, & Tversky, 2007; May, 2004; Morsella & Krauss, 2004; Mou, McNamara, Valiquette, & Rump, 2004; Portugali, 1996; Taylor & Tversky, 1992; Tversky, 2000; Tversky & Hard, 2009). While the specific cognitive mechanisms underlying this link are still under debate, the fact that the body plays an active role in the cognition of space is, at this point, well-established. Because of the link between body, action, and space, it is likely that designing spatial-skill interventions that engage the body and action systems will be effective. The goal of our recent work, and the present paper, is to establish a framework for these interventions which can be used to inform the design of technologies that effectively train spatial skills.

Digital media has made it possible to create experiences of simulated objects and spaces that enable people to apply spatial skills in ways that would be impossible in typical real-world interactions (Benedikt, 1991; Murray, 1997; Nitsche, 2008). Digital media can also be used to create environments with spatial properties that are very different from the real world (Ambinder, Wang, Crowell, Francis, & Brinkmann, 2009; Kortemeyer, Tan, & Schirra, 2013). Practically speaking, digital media interfaces have tended to focus on the visual experience of a system’s output (e.g. high-definition and high-framerate graphics) rather than on creating input devices that engage the body and motor system. While user interfaces like windowed operating systems use spatial metaphors for their organizational structures, the physical connection to the space of infinitely nested folders is limited to pointing and clicking actions. Likewise, video games include sophisticated simulated environments and countless ways to interact with and traverse them, but the physical input experience is often limited to pressing buttons on a controller. Until recently, there has been a very limited relationship between the spaces of the digital content and the physical world. That is, the relatively arbitrary action–effect relationships established in these systems are quite simple and disconnected from the rich array of sensorimotor experiences with tight spatial relationships experienced in real life.

An evolving field of HCI, known as TEI, aims to bridge the gap between the physical environment and digital content. TEIs incorporate physical objects and sensing systems to give people ways to use their bodies to interact with digital information. An emerging trend in TEI research is how embodied interaction can leverage or be leveraged by embodied cognition research. In this paper, we describe the relationship between the embodied aspects of TEIs and the embodied aspects of spatial cognition.

In this paper, we review and present evidence that spatial cognition is shaped by bodily and action states and describe the field of tangible and embodied interfaces through related theoretical perspectives. We then define a design space that links TEIs with aspects of spatial cognition through the concept of scale. We then place different TEIs within the design space and discuss each of the systems with respect to how the content of the systems leverages the relationship between the body and spatial cognition for different purposes. We then show how designing TEIs from a spatial cognition perspective leads to new relationships between the body, digital content, and space. The paper closes with a discussion of the challenges we face in applying our research to STEM education and practice as well as the broader impacts of our work.

Background

Our research draws from and integrates cognitive science research showing that spatial cognition is shaped by the body and action, and from design research focused on digital systems that incorporate tangible and embodied interactions.

Spatial cognition is shaped by bodily and action states

Ultimately, perceptual and cognitive processes have developed through evolution to enable individuals to act in the environment and achieve a series of goals to survive. The ways in which individuals can achieve the goals, and indeed the goals themselves, are determined by the characteristics of the environment in which they are acting (e.g. the steepness of hills, how far apart objects are) and the action capabilities of the body (e.g. healthy, fatigued, or hungry). Tremendous research efforts have been dedicated to understanding how sensory information about the body and the environment is converted into action. Some of these recent studies of spatial abilities, perception, and navigation have provided evidence for the ways that spatial cognition and the body are related. This research has been shaped by and led to the development of two main approaches to understanding spatial and other cognitive processes: ideomotor theory and ecologically based embodied cognition. Even though these approaches differ on the underlying mechanisms thought to support the interaction between the body, action, and cognition, both perspectives often lead to similar predictions about how the action system is tightly linked with and influences perception and cognition.

Ideomotor theory has emerged out of information processing and representational accounts of the series of events that occur from sensation, through perception and cognition, to action. According to information processing-based theories, perceptions, thoughts, and actions are represented and, importantly, stored in the brain by the activity of specific subsets of neurons. Earlier information processing models were based on the notion that sensory information is converted into actions through a series of processing stages that occur in a single direction (sensation–perception–cognition–action planning) because the representations activated in the subsequent stages were determined by the results of the preceding stages. Thus, although perception is connected to action, the influence is unidirectional with results of the perceptual processing helping to activate and shape actions, but not the reverse (i.e. action would not shape perception). Ideomotor theories (e.g. Hommel, Müsseler, Aschersleben, & Prinz, 2001) are based on the notion that perception and action systems are reciprocally interconnected and that, through experience, representations of specific sensory/perceptual events become tightly bound to the representations of specific actions. The result of the perception–action binding is a set of bidirectional interactions wherein the activation of a perceptual representation (via the reception of a specific series of sensations) can automatically prime the associated action, and conversely the activation of an action representation (via a decision and desire to move) can automatically activate the representation of the associated perceptual event. It is through this latter action-to-perception direction of activation that the action and bodily systems can prime or shape perceptual representations and processes.
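
A minimal computational sketch can make the bidirectional claim concrete. The toy model below is our own illustration, not an implementation of Hommel et al.’s theory; it simply stores perception–action pairings in both directions so that activating either member of a pair retrieves the other. All names and example events are hypothetical.

```python
# Toy illustration of ideomotor-style bidirectional binding (hypothetical model,
# not an implementation of Hommel et al.'s Theory of Event Coding).

class EventCodeStore:
    """Stores perception-action pairings and lets activation flow both ways."""

    def __init__(self):
        self._perception_to_action = {}
        self._action_to_perception = {}

    def bind(self, percept, action):
        # Experience binds a perceptual event to the action that produced it.
        self._perception_to_action[percept] = action
        self._action_to_perception[action] = percept

    def prime_action(self, percept):
        # Perceiving an event primes the associated action.
        return self._perception_to_action.get(percept)

    def prime_percept(self, action):
        # Planning an action primes the expected perceptual consequence.
        return self._action_to_perception.get(action)


store = EventCodeStore()
store.bind(percept="cup moves toward mouth", action="lift cup")
print(store.prime_action("cup moves toward mouth"))  # -> "lift cup"
print(store.prime_percept("lift cup"))               # -> "cup moves toward mouth"
```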

The embodied approaches to cognition have largely evolved from ecological approaches to cognition (e.g. Gibson, 1966) which, in contrast to information processing theories, reject the notion of representations. Instead of action selection and planning being the result of a series of processing events relying on representations, action possibilities (affordances) are directly drawn (perceived) from the environment. Because the bodily and action status of the individual influences the types of actions the individual can perform, these factors influence the affordances and perception of the environment. In this way, perception and other cognitive events are embedded firmly in the body and the environment in which the body is acting, with the consequence that perception and cognition are tightly linked to and shaped by the bodily and action state of the body.

In the case of spatial abilities, a growing series of studies has revealed that the action and bodily state of the individual influences the efficiency of these abilities. For example, a series of studies on mental rotation have shown that people perform congruent actions when solving complex mental rotation tasks, implying that spatial skills are scaffolded or enhanced by activation of the motor system (Chandrasekharan et al., 2006; Wohlschläger, 2001). Other studies have shown similar relationships between the body and spatial skills including perspective taking (Hegarty & Waller, 2004), scaling (Proffitt, 2013), and navigation and orientation (Darken & Peterson, 2001; Hegarty, Montello, Richardson, Ishikawa, & Lovelace, 2006). Similarly, perception of objects and spaces has been shown to be related to the state of the body. Proffitt and colleagues have shown, for example, that wearing a heavy backpack makes hills appear steeper (Bhalla & Proffitt, 1999) and distances appear longer (Witt, Proffitt, & Epstein, 2005). These effects are related to health and age (Bhalla & Proffitt, 1999) and skill (Taylor, Witt, & Sugovic, 2011; Witt & Proffitt, 2005). Finally, the body has been shown to play a role in the construction of mental representations of navigable spaces. Hegarty et al. (2006) showed that people who physically navigate an environment are better able to make judgments about the distances and directions between points of interest than people who navigated a virtual environment or watched a video of the environment being navigated. Overall, regardless of whether one prefers an ideomotor-based or ecologically based account of these findings, it is clear that the body and action history of the individual influences the perception of space and the application of spatial skills.

TEIs engage spatial cognition through the body and action

Given the relationships between the body, action, and space, it makes sense to design digital media-based interventions that engage spatial skills through physical movement. Digital technology has made it possible to create environments that engage spatial skills in novel ways. Because TEIs couple physical movements with the flexibility of digital technology, they are particularly well-suited for systems that engage and develop spatial skills.

Early TEIs used physical objects to represent digital data. For example, Durrell Bishop’s Marble Answering Machine links a digital recording of a voicemail with a physical marble (Poynor, 1995). To listen to the voicemail, a person places the marble in a particular spot on the machine. The machine recognizes the marble and plays the message. To delete the message, a person simply puts the marble back in the machine’s hopper. Systems like the Marble Answering Machine make it possible for people to use the same skills they use in the physical world to engage with digital content. The “System descriptions” section later in the paper describes several tangible interfaces in detail, with respect to how they engage spatial cognition.
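
To make this token-based interaction pattern concrete, the sketch below shows one way such a system could be organized in software. Bishop’s design was a physical prototype, so the class, method names, and message bindings here are purely hypothetical.

```python
# Hypothetical sketch of a Marble Answering Machine-style tangible interface:
# each physical token (marble) stands for one stored voicemail.

class MarbleAnsweringMachine:
    def __init__(self):
        self.messages = {}   # marble_id -> audio clip (here just a string)
        self.hopper = set()  # marbles waiting to be reused

    def record_message(self, marble_id, audio):
        # An incoming voicemail is bound to a free marble, which rolls into a tray.
        self.messages[marble_id] = audio

    def place_on_play_spot(self, marble_id):
        # Placing the marble on the playback indentation plays its message.
        return self.messages.get(marble_id, "<no message bound to this marble>")

    def return_to_hopper(self, marble_id):
        # Dropping the marble back into the hopper deletes the message.
        self.messages.pop(marble_id, None)
        self.hopper.add(marble_id)


machine = MarbleAnsweringMachine()
machine.record_message("marble-3", "Call me back about Thursday.")
print(machine.place_on_play_spot("marble-3"))
machine.return_to_hopper("marble-3")
```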

Extending this line of thinking has led to a wide variety of novel tangible interfaces and ways to think about them. Recently, researchers have become interested in the cognitive aspects of using the body to engage with digital content. Antle and Wang (2013) showed that a tangible puzzle interface leads to an increase in the use of epistemic actions. Their study compared the motor-cognitive strategies people used when solving a jigsaw puzzle on a touch screen or when using a tangible interface that used the puzzle pieces as physical objects on an interactive tabletop (Antle & Wang, 2013). People using the tangible interface took less time to solve the puzzle, were more likely to sort the pieces, and made more movements—like rotating the pieces or testing placements. These results suggest that people use similar strategies with tangible interfaces as they do with traditional physical tools. Based on the current interest in embodied cognition within the TEI community, van Dijk, van der Lugt, and Hummels (2013) defined three “flavors” of embodied cognition and discussed how they relate to the design of TEI systems: distributed representation and computation (DRC), socially situated practice (SSP), and sensorimotor coupling and enactment (SCE). Of the three, SCE most directly relates to systems that specifically target spatial cognition. SCE refers to the ways that movement and perception inform each other and lead to behavior and cognition. However, there is little research from the field of TEI or from the cognitive sciences that addresses how SCE plays out in embodied digital systems:

If we want to design systems that support the way people make sense of the world around them, the question is whether sensorimotor couplings are going to be enough, or whether sensorimotor theory is mostly useful when designing for ‘bodily phenomena’, like sports, or feelings of stress. That is, can sensorimotor theory help us get a grip on the more ‘cognitive’ activities for which we normally would use words like ‘remembering’, ‘thinking’, ‘representing’, ‘deciding’, ‘creativity’, ‘communication’, and so on? (van Dijk et al., 2013).

Our research addresses this question directly by drawing design inspiration from known relationships between the body and spatial cognition and evaluating systems designed from this perspective using cognitive science methods. This research approach has led to the definition of the design space presented below and has informed our own design work, which has broadened our understanding of the relationships between body, action, and space from both design and cognitive science perspectives.

Defining the design space

Based on our analysis of existing TEIs, spatial cognition research, and our own experiences designing relevant systems, we identified three important elements of interactive systems that engage spatial cognition: the embodied aspect of the system (i.e. how the system engages the body and action systems), the aspect of spatial cognition the system engages, and the intervention that the system supports. With these elements in mind, the design space shown in Fig. 1 can be defined for digital systems that use a dynamic combination of embodiment and intervention to engage, support, or alter some aspect of spatial cognition.

Fig. 1

The design space defines embodied interfaces that engage spatial cognition in terms of the way that they engage the body (Embodiment), the aspect of spatial cognition they engage, and the spatial task that they ask a user to perform (Intervention)

The following sections describe embodiment, aspects of spatial cognition, and intervention and the different categories they contain.

Embodiment

The “embodiment” aspect describes the ways that systems engage the body and action. The list shown in Fig. 1 is not, and cannot be, exhaustive. Any time a designer creates a new way for people to use their bodies to interact with technology, a new item could be added to the list. This particular set of ways that TEIs engage the body is drawn from our own analysis, which focused on TEIs that leverage embodiment as a way to engage spatial cognition.

Each method for engaging the body is classified based on a parameter termed scale—figural, vista, or environmental—as defined by Montello (1993). Figural scale embodied interactions involve grasping and moving physical objects or controlling virtual objects as if they were real objects that could be manipulated using the hands. Vista scale embodied interactions create ways to engage with large content that is visible at a distance or alter the visible qualities of a vista scale space. Establishing embodiment at an environmental scale requires users to navigate a physical or virtual environment. The distinctions between and characteristics of these scales will be expanded upon and made clearer in the subsequent “System descriptions” section.

Aspects of spatial cognition

The “aspects of spatial cognition” axis lists the different aspects of spatial cognition that are engaged by tangible and embodied interaction systems. The categories—abilities, perception, and navigation—broadly group different aspects of spatial cognition. They are drawn from analysis of the different types of spatial cognition described in prior research (see http://www.silccenter.org/index.php/resources/testsainstruments for examples) and focus, in particular, on the skills that have been shown to relate to the body.

The abilities category contains spatial skills that relate to the performance of mental operations on images or objects. Spatial abilities typically relate to operations that could be performed on figural scale objects, such as rotation or assembly. For example, mental rotation is the ability to mentally represent an object and operate on that object. Mental rotation has been shown to be linked to the body with tests showing the effect of performing congruent and non-congruent actions when attempting to solve complicated mental rotation problems (Chandrasekharan et al., 2006). These findings lead to the idea that the motor system is leveraged in the use of mental rotation skills.

The perception category lists the elements of an environment and the objects it contains that are perceived differently depending on the state of the body. For example, a series of studies by Proffitt and colleagues have shown that wearing a heavy backpack, which would make it more difficult and effortful to move around, makes hills appear steeper (Bhalla & Proffitt, 1999) and distances appear longer (Witt et al., 2005). Such findings support the notion that the action potential of the body influences the perception of environmental characteristics.

Finally, the navigation category highlights the aspects of an environment or the qualities of a mental representation of that environment that are influenced by physical movement through that environment. For example, Hegarty et al. reported that people who watched a video of someone else navigating an environment made poorer judgments about distances and directions than people who actually physically navigated the environment (Hegarty et al., 2006). This finding indicates that experience involving both perception and action in an environment enhances navigation of that space to a greater degree than experience involving perception alone.

Intervention

Intervention is a descriptive column that is filled in with the content of the systems that will be plotted in the diagram. Interventions include the tasks that the system presents and the ways the system responds to input in the service of accomplishing that task. Interventions create the link between the embodied aspects of the system and spatial cognition by creating relationships between body movement (embodied interactions) and the spatial aspects of the content.

Descriptions of the interventions are drawn from our analysis of existing TEIs and are an attempt at a high-level description of what the user does with the system. As with the descriptions of embodiment, the column cannot include an exhaustive list since new systems will constantly create new things for users to do.

Intervention and embodiment are tightly coupled. Whether designers start with an intervention or with a method for establishing embodiment in mind, each must be designed in a way that supports the other. For example, an intervention that requires a person to walk around a room would not work well with a system that establishes embodiment by grasping and moving objects. The system would need to be updated with either a method for tracking walking or an intervention that uses grasping. Although this conclusion seems obvious, the strong link between scale of embodiment and intervention has implications for the aspect of spatial cognition that can be engaged by any given system.

Summary

The three elements this diagram brings together—embodiment, intervention, and spatial cognition—describe the aspects of TEI design that work together to create tangible and embodied interactive systems that engage spatial cognition. By defining these elements and the categories they contain and illustrating their relationships on a diagram, we outline a design space for TEIs that fit within the focus of our research. This diagram provides a starting point for designers interested in working in this space and researchers interested in using TEIs for spatial cognition research. The systems described in the following sections were selected as good examples of systems that leverage embodiment to engage spatial cognition. Our analysis led to the understanding that embodiment and spatial cognition are linked through intervention. Plotting these relationships on the diagram presented above leads to observable trends and opportunities for research in this space.

System descriptions

To better understand the relationships between embodiment, intervention, and spatial cognition as they relate to the design of interactive systems, we analyzed several classic interactive systems that engage the body and spatial cognition. The systems presented here were selected to be representative of the range of ways in which TEI systems have engaged spatial cognition from an interaction perspective and do not constitute a comprehensive set of TEI systems that relate to spatial abilities. These systems engage the body at a particular scale (embodiment) and ask the user to perform some task (intervention). The combination of embodiment and intervention may engage a particular aspect of spatial cognition related to representation, perception, or navigation. We define each system using the language of the design space and then plot the systems on the design space diagram to highlight trends and opportunities for research in the fields of cognitive science and HCI.

Figural

The systems described in this section engage the body at a figural scale, by asking people to manipulate physical or virtual objects to accomplish some task.

FoldIt

FoldIt was developed by Cooper et al. (2010) as a way to crowdsource the task of finding the correct conformation of protein molecules. The conformation, or shape, of a protein molecule determines how it is used in a biological system. However, the specific shape of a protein cannot be directly inferred from its chemical structure and computers on their own are not particularly good at determining these shapes. FoldIt supported the use of people’s innate spatial skills to solve this problem.

FoldIt presents a virtual, three-dimensional (3D) image of a protein molecule which is not yet in its correct shape and enables a user to alter the shape of the molecule using a mouse. It was released to the public as downloadable software and has, at this point, engaged more than 57,000 players and shown that humans perform better on protein-folding tasks than computers (Cooper et al., 2010).

FoldIt turns a microscopic protein molecule into a figural scale object, which makes it possible for users to apply small-scale spatial abilities to alter its shape and develop an understanding of the kinds of relationships in the molecule that determine its form. Specifically, it establishes embodiment through the use of a mouse, which acts as a virtual proxy for the user’s hands. The intervention presents a virtual representation of a physical object and updates the shape of the object based on the user’s interactions. The system enables users to apply mental rotation and scaling skills to solve the problem of finding the correct shape for the molecule.
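
As a rough illustration of what “finding the correct shape” involves computationally, the toy score below penalizes conformations whose residues clash. This is our own simplification for illustration only; FoldIt’s actual scoring model is far more sophisticated.

```python
import math

# Toy clash-penalty score for a protein-like chain of 3D points (hypothetical;
# FoldIt's real scoring model is far richer than this simple distance check).

def clash_score(conformation, min_distance=4.0):
    """Sum a penalty for every pair of residues packed closer than min_distance."""
    penalty = 0.0
    for i in range(len(conformation)):
        for j in range(i + 2, len(conformation)):  # skip directly bonded neighbours
            d = math.dist(conformation[i], conformation[j])
            if d < min_distance:
                penalty += (min_distance - d) ** 2
    return penalty

# A mouse drag proposes a new conformation; a lower score means fewer clashes.
before = [(0, 0, 0), (3, 0, 0), (3.5, 0.5, 0), (6, 0, 0)]
after = [(0, 0, 0), (3, 0, 0), (6, 0, 0), (9, 0, 0)]
print(clash_score(before), clash_score(after))
```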

URP

URP was developed by Underkoffler and Ishii (1999) at the MIT Media Lab. URP is a tabletop-based urban planning simulation tool that urban planners can use to understand the relationships between building location, time of day, shadows, and wind.

The system tracks the position and orientation of physical objects on the surface of the table and displays the results of a simulation based on the placement of the objects. The objects in URP include models of buildings and a clock for changing the time of day. A user grasps and moves the building objects to set their positions and turns the hands of the clock to change the time. The system responds by displaying wind vectors and shadows on the surface, allowing users to develop an understanding of how the placement of the buildings alters the properties of the environment.

URP establishes embodiment by providing physical objects that a user grabs and moves. The intervention asks users to move the buildings and responds by graphically displaying the results of the simulation it runs. Because the objects are small-scale models of the buildings and the graphical display is a scaled representation of the environment, URP engages scaling ability along with providing a tool for users to develop mental rotation skills as they relate to building- and environmental-scale objects.
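
To give a sense of the kind of computation behind such a display, the sketch below estimates the length and direction of a building’s shadow from the time of day. The sun-angle model is deliberately crude and is our own assumption for illustration; it is not URP’s actual simulation code.

```python
import math

# Simplified shadow estimate for a URP-style tabletop simulation (hypothetical;
# the real system used a more complete solar/optical model).

def shadow_vector(building_height, hour_of_day):
    """Return (length, azimuth_degrees) of the shadow cast by a building of the
    given height, using a crude model where the sun rises at 6:00 and sets at 18:00."""
    if not 6.0 < hour_of_day < 18.0:
        return None  # night: no shadow to draw
    # Sun elevation sweeps 0 -> 90 -> 0 degrees between 06:00 and 18:00.
    elevation = math.radians(90.0 * (1.0 - abs(hour_of_day - 12.0) / 6.0))
    length = building_height / math.tan(elevation)
    # Compass azimuth of the shadow: roughly west in the morning, east in the afternoon.
    azimuth = 270.0 if hour_of_day < 12.0 else 90.0
    return length, azimuth

print(shadow_vector(building_height=30.0, hour_of_day=9.0))
```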

Topobo

Topobo, developed by Raffle, Parkes, and Ishii (2004), is a construction kit that can record and play back physical movements. Topobo consists of a set of passive and active blocks. The passive blocks snap together to build small animal-like models. The active blocks contain a motor and snap together with the passive blocks. When a user rotates blocks connected to an active block, the system remembers the direction, speed, and magnitude of the rotation. The rotations can then be played back by the system. By coordinating the rotations of several “limbs” connected to active blocks in an animal-like model, users can create walking sculptures. Creating a successful walk requires considering aspects of the model like balance and direction of forces (Raffle et al., 2004).

Topobo establishes embodiment by providing physical blocks that are assembled into a larger object. This object itself can be grasped and manipulated. The intervention asks users to create an animated object and plays back that animation in the physical object. Building the object engages skills related to assembly. Animating the object engages skills related to force and motion or mechanical reasoning.
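
The record-and-playback behaviour can be pictured as a timestamped list of joint angles stored per active block and replayed on demand. The sketch below is our own simplified rendering of that idea, not Topobo’s firmware.

```python
import time

# Simplified record/playback loop for a Topobo-style active block (hypothetical).

class ActiveBlock:
    def __init__(self):
        self.recording = []  # list of (seconds_since_start, joint_angle_degrees)

    def record(self, start_time, angle):
        self.recording.append((time.monotonic() - start_time, angle))

    def play_back(self, set_motor_angle):
        """Replay the recorded motion by driving the motor through the stored angles."""
        start = time.monotonic()
        for timestamp, angle in self.recording:
            # Wait until the recorded timestamp, then reproduce the angle.
            delay = timestamp - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)
            set_motor_angle(angle)


block = ActiveBlock()
t0 = time.monotonic()
for angle in (0, 15, 30, 15, 0):   # user twists the block; angles are made up
    block.record(t0, angle)
    time.sleep(0.1)
block.play_back(lambda a: print(f"motor -> {a} degrees"))
```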

Vista

Projection mapping and the Mystery Spot

Projection mapping techniques align digital projections with the boundaries and surfaces of physical objects, from building facades to furniture and walls. For example, The 600 Years, developed by Dan Gregor and Amar Mulabegović in 2010, projects onto the façade of a clock tower in Prague to depict the interweaving of the history of the tower and the country. Using projection mapping techniques, the designers make the tower appear to crumble and rebuild, catch fire, turn inside out, and merge with the sky (TheMacula - The 600 years, Astronomical Clock Tower, Prague, n.d.). By changing the coloration and creating the appearance of movement along these physical features, projection mapping can cause objects to appear to have spatial properties that are physically impossible. For example, rooms can be made to look larger than they actually are and building facades can appear to rotate or grow and shrink.
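
At a technical level, aligning a projected image with a physical surface typically comes down to estimating a homography between projector pixels and measured points on the surface. The sketch below uses OpenCV’s standard four-point perspective transform; the corner coordinates are made up for illustration.

```python
import numpy as np
import cv2

# Minimal projection-mapping alignment sketch: map projector pixel coordinates onto
# the four measured corners of a physical facade (corner values are hypothetical).

projector_corners = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])
facade_corners = np.float32([[102, 88], [1790, 60], [1825, 1010], [140, 1062]])

# Homography that warps projector space onto the facade as seen by a calibration camera.
H = cv2.getPerspectiveTransform(projector_corners, facade_corners)

# Warp an arbitrary point (e.g. the centre of a rendered element) into facade coordinates.
point = np.float32([[[960, 540]]])
mapped = cv2.perspectiveTransform(point, H)
print(mapped.ravel())
```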

The Mystery Spot is a tourist attraction outside of Santa Cruz, California (The Mystery Spot Official Website, n.d.). By constructing a cabin with walls that are perpendicular to a hillside, instead of vertical with respect to gravity, the Mystery Spot creates illusions in which balls appear to roll uphill and short people appear taller than tall people. The way the Mystery Spot is constructed breaks the link between visual perception and the proprioceptive and vestibular systems, causing objects and environments to appear to have spatial properties that do not make sense in the physical world.

While the Mystery Spot is not a digital, interactive system, it highlights ways that the body and perception of space are linked and illustrates techniques that designers of interactive systems might employ to create new ways to interact with vista scale spaces. By combining projection mapping techniques with embodied interactions, designers could create vista scale environments with spatial properties that change based on the bodies and movements that engage with them.

PingPongPlus is one early example of tracking objects and altering projections based on their movement (Ishii, Wisneski, Orbanes, Chun, & Paradiso, 1999). The PingPongPlus system uses projections to modify a standard game of ping pong. In one game, holes appear in the ping pong table wherever the ball bounces. Players lose the point if the ball hits an empty space (Ishii et al., 1999).

These systems would establish embodiment through the visual and vestibular systems and through walking around the rooms and interacting with objects and spaces in the room. Interventions could ask users to alter spatial properties to accomplish some task and would engage aspects of spatial cognition related to the perception of angles, distances, and directions that are constructed based on the state of the body.

SMSlingshot

SMSlingshot is an interface intended to create a sense of agency in public spaces by enabling people to write short messages and display them on the side of a building using an interface based on a slingshot. It was developed by Patrick Tobias Fischer, Thilo Hoffman, Sebastian Piatza, and Christian Zoellner in 2010 (Fischer, Hornecker, & Zoellner, 2013).

SMSlingshot is a physical slingshot augmented with a cell phone keypad. Users enter a message using the keypad then aim and shoot the slingshot at a distant wall. A projector then shines the message on the wall where the user aimed it. SMSlingshot does not necessarily alter the perceived spatial qualities of an environment, but it does illustrate an embodied interaction that enables people to interact with digital content in vista scale spaces (Fischer et al., 2013).

In SMSlingshot, typing the message and then aiming and drawing the strings of the physical slingshot to send the message establishes embodiment. The intervention asks people to alter the visual qualities of a distant object. Depending on the content of the projections, a system like this could engage aspects of spatial cognition related to perception of distance and size.

Environmental

Slower Speed of Light

Slower Speed of Light was developed by Gerd Kortemeyer, Philip Tan, and Steven Schirra in 2013 as a way to help physics students understand Einstein’s theory of special relativity (Kortemeyer et al., 2013). Slower Speed of Light is a video game played from a first-person point of view in a navigable virtual environment. Players use a keyboard to move around a small environment collecting tokens. Each time a player collects a token, the speed of light in the virtual world slows down, so that the speed of the player’s movement and the speed of light converge. The game engine changes the visible qualities of the environment to show the effects of relativity, especially the effects of moving at nearly the speed of light.
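
The visual distortions in the game track the Lorentz factor, which grows without bound as the player’s walking speed approaches the (artificially lowered) speed of light. The snippet below computes that factor for a few illustrative values; the specific numbers are our own, not taken from the game.

```python
import math

# Lorentz factor gamma = 1 / sqrt(1 - v^2 / c^2). In Slower Speed of Light, collecting
# tokens lowers c toward walking speed, so v/c climbs and relativistic effects grow.

def lorentz_factor(v, c):
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

walking_speed = 1.4  # metres per second (hypothetical player speed)
for speed_of_light in (3.0e8, 14.0, 2.0, 1.5):
    gamma = lorentz_factor(walking_speed, speed_of_light)
    print(f"c = {speed_of_light:>12}: gamma = {gamma:.3f}")
```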

Slower Speed of Light establishes embodiment using navigation of a virtual environment controlled by a keyboard. The intervention asks people to move through the environment and collect tokens and responds by altering the properties of the environment that relate to the theory of relativity. This system engages aspects of spatial cognition related to distance and heading estimation, construction of route and survey knowledge, and the relationship between distance and time.

feelSpace

The feelSpace belt was developed by Saskia K. Nagel and her collaborators at the Institute of Cognitive Science in Osnabrück, Germany to investigate the plasticity of the sensorimotor system by attempting to create a new sense (Riener & Ferscha, 2008). feelSpace is a belt with several vibration motors and a digital compass stitched into it. The motors sit in contact with the wearer all the way around the waist. When a person wears the belt and walks around, the motor oriented towards north constantly vibrates; when the person rotates, that motor turns off and the neighboring motor that now faces north starts to vibrate. The designers asked users to wear the belt for several weeks and observed that, over time, people stopped noticing the vibration but were able to incorporate it into their sense of direction. In one study, blind participants who had worn the belt for an extended period of time were led around a complex path and asked to point back to the start of the path (Kärcher, Fenzlaff, Hartmann, Nagel, & König, 2012). Wearing the belt was shown to improve this particular spatial skill, called homing.
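
In software terms, the belt’s behaviour reduces to choosing which of the evenly spaced motors currently points closest to magnetic north. The sketch below is a minimal version of that selection rule; the motor count, indexing convention, and headings are our own assumptions, not the actual feelSpace firmware.

```python
# Minimal feelSpace-style motor selection: given a compass heading, vibrate the motor
# that currently points north (motor count and heading values are hypothetical).

def north_facing_motor(compass_heading_degrees, motor_count=16):
    """Motor 0 sits at the wearer's front; motors are spaced evenly and indexed
    clockwise when viewed from above. The north-facing motor is offset from the
    front by the negative of the wearer's compass heading."""
    degrees_per_motor = 360.0 / motor_count
    offset = (-compass_heading_degrees) % 360.0
    return round(offset / degrees_per_motor) % motor_count

for heading in (0, 45, 90, 180, 355):   # wearer turns clockwise
    print(f"heading {heading:>3} degrees -> motor {north_facing_motor(heading)}")
```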

feelSpace establishes embodiment through a coupling between the location of a constantly present physical sensation and the physical movement of the person through an environment. The intervention simply asks users to wear the belt as they go about their day-to-day lives. The system engages aspects of spatial cognition related to homing, heading recall, and the construction of survey and route knowledge.

Discussion

Defining this design space and using it to describe existing TEIs leads to a more thorough understanding of how the body, action, and spatial cognition are related in different systems. Plotting these systems on the design space diagram provides a clear picture of the design trends and the design and research opportunities for embodied interfaces that engage spatial cognition.

Populating the design space diagram

Figure 2 shows the result of plotting these systems with respect to the scale of their interaction and the aspect of spatial cognition they engage. The name of each of the systems described above is plotted on the diagram in relation to the way it engages the body and the aspect of spatial cognition its intervention appears to engage. Because none of these systems have been evaluated explicitly for their relationships to spatial cognition, the shades of blue highlight the aspects of spatial cognition a system is most likely to engage and how directly.

Fig. 2

Plotting existing TEI systems on the design space diagram shows that different combinations of embodiment and intervention engage different aspects of spatial cognition

The intervention column is filled in with descriptions of the task that each system asks the user to perform. As a descriptive column, it serves as a reminder of what the system names stand for, but it is the systems, in their entireties, that support the relationship between the body and spatial cognition; on the diagram, that full relationship is encompassed by each system’s name. The intervention descriptions are primarily a reminder of how each system supports that relationship.

Each system plotted in Fig. 2 incorporates relationships between embodiment and spatial cognition in different ways. Illustrating these relationships leads to insights about both design trends within the TEI community and opportunities for research in both TEI design and cognitive science. The trends and opportunities, which are described in detail in the following section, provide starting points for the design of new systems which can be evaluated from a spatial cognition perspective. The “Tangibles for Augmenting Spatial Cognition” project, discussed later in this article, illustrates how working within this design space can lead to interfaces that engage specific aspects of spatial cognition and how to evaluate that engagement.

Trends and opportunities

Figure 3 highlights the trends and opportunities for research in this design space:

Fig. 3

The populated design space diagram shows a trend in the relationship between the scale of embodiment of a system and the aspect of spatial cognition that system engages. It also makes it clear that there are opportunities for research and design of systems that create new relationships between the body and space

The main trend highlighted by Fig. 3 is that the scale of interaction tends to relate to the aspect of spatial cognition a system engages. While some systems do cross the boundaries of the spatial cognition classifications, broadly speaking, figural scale embodiment relates to small-scale spatial abilities, vista scale embodiment relates to perceptual effects, and environmental embodiment relates to navigation abilities. This trend can give designers a starting point for either designing a system to target a specific aspect of spatial cognition or selecting an aspect of spatial cognition that a particular system may be well-suited to engage. The reasons for this trend are unclear, but it may be that simply selecting the right tool for the job (or the right job for the tool) led designers to the sweet spots that balance scale of embodiment with the different aspects of spatial cognition. If that is the case, then the gaps in the diagram reveal opportunities for research in this space.

First, there is an opportunity to design systems that more directly engage perspective-taking abilities. Our analysis did not find any TEI systems that engage perspective taking, even though it has been shown to be linked to the body (Tversky & Hard, 2009). Again, this could be related to the fact that, until recently, technology that engaged the body in a way that related well to perspective taking was cumbersome and expensive. Increased availability of virtual reality equipment may lead to additional systems that engage perspective-taking skills.

A second opportunity is for the development of systems that engage the body across multiple scales. Again, access to technology may be the limiting factor here. Large systems that alter the apparent properties of vista scale environments or that outfit environmental scale spaces with sensors and that respond to people’s movement through them have, up until recently, been expensive and complicated to develop. The increasing availability of technologies like Arduino microcontrollers, high-resolution and high-frame rate cameras, and brighter projectors with very high resolutions will continue to make building large and complex interactive systems more approachable. Combining these large-scale systems with tangible objects or virtual reality headsets can create systems that use figural scale interactions to engage vista and environmental scale aspects of spatial cognition, or that more readily translate large-scale data to figural scale interactive systems.

Finally, focusing the design of systems specifically on engaging spatial cognition, by creating interfaces that target a single spatial skill and include controls that limit the use of additional skills, can lead to new research opportunities in the cognitive and psychological sciences. In particular, as the case studies below will show, drawing inspiration from the cognitive sciences to help inform and shape how interfaces establish embodiment and how interventions target specific skills can lead to interfaces that can be evaluated using methods from cognitive science and psychology to show their specific effects on spatial cognition. Changing specific elements of the interface and intervention and evaluating the effect of those changes can lead to insights about how specific aspects of embodiment are related to spatial cognition.

Expanding the design space

Our own projects, emBodied Digital Creativity (BDC) and Tangibles for Augmenting Spatial Cognition (TASC), illustrate cognition-focused design and show how it can lead to systems that create new experiences and support cognitive science research. Each of these projects expands the design space presented in this article by leveraging sensing and display technologies to create novel embodied experiences. These projects combine figural and vista scale methods for establishing embodiment, attempt to eliminate variables that might confound their effects on spatial cognition, and show how the use of spatial skill evaluations can lead to new insights about the relationship between the body, spatial cognition, and digital media.

BDC: emBodied Digital Creativity

BDC was a National Science Foundation-funded project that sought to develop a system that leveraged ideomotor coding to create a strong sense of identification with a virtual avatar and use that system to support creativity. BDC was developed in the Synaesthetic Media Lab at Georgia Tech between 2008 and 2011. The system creates a one-to-one mapping between the movements of a human-shaped puppet (Fig. 4) and a virtual 3D avatar. To use the system, a person wears the puppet between her shoulders and knees (Fig. 4) and holds on to the puppet’s arms. The puppet moves along with the person and generates the data necessary for rotating the joints of the avatar into the correct position.

Fig. 4

The BDC interface hangs from the shoulders and attaches at the knees in such a way that it moves with the person wearing it
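
A bare-bones version of the one-to-one puppet-to-avatar mapping might look like the sketch below, where each sensed puppet joint rotation is copied onto the corresponding avatar joint every frame. The joint names and sensor readings are hypothetical; the actual system used its own sensing hardware and 3D engine.

```python
# Hypothetical one-to-one puppet-to-avatar joint mapping, in the spirit of BDC
# (not the actual system's code; joint names and readings are made up).

PUPPET_TO_AVATAR_JOINT = {
    "left_shoulder_sensor": "LeftShoulder",
    "right_shoulder_sensor": "RightShoulder",
    "left_elbow_sensor": "LeftElbow",
    "right_elbow_sensor": "RightElbow",
}

def update_avatar(puppet_readings, avatar_pose):
    """Copy every sensed puppet joint rotation onto the matching avatar joint."""
    for sensor, joint in PUPPET_TO_AVATAR_JOINT.items():
        if sensor in puppet_readings:
            avatar_pose[joint] = puppet_readings[sensor]  # degrees of rotation
    return avatar_pose

frame = {"left_shoulder_sensor": 42.0, "right_elbow_sensor": 15.5}
print(update_avatar(frame, avatar_pose={}))
```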

Cognitive science research informed the project during all phases. The first phase of the project used a common coding approach to design, build, and evaluate an interface that mapped own body movement onto a virtual character to create a strong sense of identification with the avatar (Mazalek et al., 2009, 2010). During the second phase of the project, we developed and evaluated an intervention focused on engaging mental rotation ability because of its link to creative problem solving (Mazalek et al., 2011).

The mental rotation intervention, shown in Fig. 5, asked players to use the puppet to make an avatar reach out and touch floating teapots while a virtual camera moved around the avatar. The camera movement caused the player to experience shifts in the spatial relationship between her physical and virtual bodies. This intervention forced players to continuously perform mental rotation in order to act in a constantly changing environment. We evaluated how the intervention influenced mental rotation skills by asking participants to complete a standard mental rotation test before and after playing the teapot game (Mazalek et al., 2011).

Fig. 5

In the teapot touching intervention, the camera flies around the virtual avatar, continuously altering the spatial relationship between the avatar and player. The player must perform mental rotation to use the puppet controller to make the avatar touch teapots as they appear
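
The demand for mental rotation comes from the fact that the player’s physical movements stay expressed in her own body frame while the on-screen frame keeps rotating with the camera. The sketch below reduces that frame shift to a single 2D rotation by the camera’s yaw; the 2D simplification and the example angles are our own, not the game’s code.

```python
import math

# Why the orbiting camera forces mental rotation: the same physical reach, expressed in
# the player's body frame, lands in a different on-screen direction as the camera yaw
# changes (2D simplification; the real system works in 3D).

def body_to_screen(reach_x, reach_y, camera_yaw_degrees):
    yaw = math.radians(camera_yaw_degrees)
    screen_x = reach_x * math.cos(yaw) - reach_y * math.sin(yaw)
    screen_y = reach_x * math.sin(yaw) + reach_y * math.cos(yaw)
    return screen_x, screen_y

reach = (1.0, 0.0)  # the player reaches straight to her right
for yaw in (0, 90, 180):
    print(f"camera yaw {yaw:>3} degrees -> on-screen direction {body_to_screen(*reach, yaw)}")
```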

BDC expands the design space by establishing embodiment at both the figural and vista scales by leveraging ideomotor coding and physical feedback to create identification with a remote body. The intervention targeted mental rotation ability by creating an embodied experience that forces players to continuously perform mental rotation.

TASC: Tangibles for Augmenting Spatial Cognition

The TASC project is supported by a grant from the Canadian Social Sciences and Humanities Research Council and is part of ongoing research aimed at identifying the important design considerations for embodied interactive systems that support spatial abilities, specifically for STEM learning. The TASC intervention targets perspective taking because of the correlations between perspective taking and large-scale spatial cognition and map reading (Hegarty & Waller, 2004; Liben & Downs, 1993).

The TASC system (Fig. 6) combines a virtual reality headset that tracks head movements, a separate sensor that tracks hand movements, and a set of physical blocks that are tracked by the table beneath them. The player can look around the environment and see her hands as she reaches out, grabs, and moves the physical blocks. During the intervention (Fig. 7), a player standing on the ground sees a bridge which is obstructed by two large blocks. To the left or right, the player can see tunnels through the blocks. The player switches to a top-down view of the environment in order to grasp and move the blocks. The player must align the tunnels with the bridge using the information obtained from the ground-level view. This intervention forces the player to perform perspective taking in order to cross the bridge.

Fig. 6

The TASC system establishes embodiment through a combination of virtual reality with head and hand tracking and physical feedback provided by the blocks, which can be grasped and moved

Fig. 7

The TASC intervention asks players to align tunnels in virtual blocks to create a clear path between the player’s position in the virtual world and some goal. The player must work from multiple perspectives to determine the position of the tunnels and to move the blocks to the correct position
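
One simple way to check progress in such a task is to compare the heading of a tunnel, as set from the top-down view, with the heading of the bridge observed from ground level. The check below is our own simplified illustration, not TASC’s implementation.

```python
# Hypothetical alignment check for a TASC-style intervention: the tunnel through a block
# must line up with the bridge axis before the player can cross.

def tunnels_aligned(bridge_heading_degrees, tunnel_heading_degrees, tolerance_degrees=10.0):
    """True when the tunnel axis is within tolerance of the bridge axis (either direction)."""
    diff = abs(bridge_heading_degrees - tunnel_heading_degrees) % 180.0
    diff = min(diff, 180.0 - diff)
    return diff <= tolerance_degrees

print(tunnels_aligned(bridge_heading_degrees=90.0, tunnel_heading_degrees=84.0))   # True
print(tunnels_aligned(bridge_heading_degrees=90.0, tunnel_heading_degrees=140.0))  # False
```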

The system bridges figural and vista scale embodiment by combining multiple types of embodied interaction with an intervention that requires working from multiple points of view. The intervention specifically targets perspective taking, which had not been supported by previous systems. It is currently being tested using a digital version of the perspective-taking evaluation developed by Kozhevnikov and Hegarty (Hegarty et al., 2006; Kozhevnikov & Hegarty, 2001).

The expanded design space

Plotting BDC and TASC on the design space diagram reveals how their focus on engaging spatial cognition leads to novel methods of establishing embodiment and expands the design space, as illustrated in Fig. 8.

Fig. 8

The BDC and TASC systems expand the design space by combining figural and vista scale embodiment and targeting perspective taking. BDC uses a figural scale puppet controller to manipulate a distant avatar. TASC uses virtual reality to enable a player to interact with a single environment at both the figural and vista scales

By leveraging ideomotor coding to establish embodiment, BDC and TASC bridge figural and vista scale interactions. Both display representations of the user’s own movement at a distance in such a way that performing a figural scale action is experienced as engaging with vista scale content. Both projects manipulate the spatial relationship between the physical body and the digital environment in order to engage spatial cognition. By incorporating multiple points of view and identification with remote bodies into their design, each system engages perspective-taking skills, something that previous TEI systems have not accomplished. The following sections describe the challenges and opportunities that we have uncovered during our work on BDC and TASC and that face researchers working in this design space.

Conclusion and further considerations

Given the strong link between the body and spatial cognition, TEIs are particularly well suited to engage, support, or alter different spatial skills. This paper presents a design space that highlights the ways that current systems exploit the link between body, action, and space to achieve their goals. Our own work on BDC and TASC illustrates how this design space definition can be useful for creating systems that intentionally engage particular aspects of spatial cognition. Evaluating these systems for their effect on spatial skills provides new understanding of ways to support spatial skills as well as the ways those spatial skills are related to the body. This research aims to eventually lead to new interventions in educational contexts and has already had an impact on the fields of HCI and cognitive science.

Broader impacts

Our research makes contributions to the field of HCI and the cognitive sciences. By applying research showing that cognition is embodied to the design of interactive systems and evaluating them to show that engaging the body can support cognition, our research has led to a new approach to HCI. This approach is novel in both the interaction methods it employs and the goals of the systems that it produces. Traditional HCI aims to develop systems that make completing a task as efficient as possible. By focusing on supporting cognition, our systems do not necessarily make solving problems easier or faster, but they create new ways for people to think about the problems they solve, which may lead to greater transferability of problem-solving skills across situations. The interventions that these systems include do not necessarily help people do work. Instead they create situations in which skills can be developed that support work in different contexts.

Evaluating these systems using methods from the cognitive sciences has already shown that embodied interaction can lead to a sense of identification with digital content (Mazalek et al., 2009, 2010) and that this identification can be leveraged to improve mental rotation ability (Mazalek et al., 2011). Future work in this space will refine our understanding of how the body and cognition relate and eventually lead to systems that more effectively exploit these relationships in support of different system goals.

Future work

Our future work in the space of TEI and spatial cognition will incorporate two threads: refining the framework and design space presented in this paper by designing and evaluating new systems, and integrating our research with educational practice through user-centered design.

Framework and design space

The design space presented in this paper and the design framework that it establishes lead to new ways to think about the design of TEI systems. The idea that TEIs can engage specific aspects of spatial cognition through embodiment and intervention leads to a refined sense of the impact of design decisions on user experience and cognition. Conversely, the clarification of how, through design, TEIs can leverage specific spatial skills to support learning, work, and task accomplishment can lead to novel interactive systems.

In order to refine the design space and increase its utility, we will conduct the following future work. First, we will support the design space definition with fundamental research. Evaluating the TASC system to determine its effect on perspective-taking skills and to determine the specific design elements that create that effect will give us a more nuanced understanding of the effect of design decisions on a system’s relationship to spatial cognition. Based on our fundamental research and discussions with the research community, we will refine the design space so that it can support new research in both cognitive science and HCI. From our refined design space, we will develop and test new TEIs that incorporate aspects of design that best engage spatial cognition. These iterative phases of designing, building, and testing TEIs will lead to continued refinement of the design space as well as produce new knowledge related to cognition of space.

Practical integration with education practice

Given recent research linking movement to comprehension and retention (Kontra, Lyons, Fischer, & Beilock, 2015), and linking the body and action to spatial cognition, the physical movements supported by TEIs may provide opportunities to incorporate physical movement into curricula. Ultimately, understanding the ways that TEIs can engage spatial cognition can lead to approaches to improving spatial cognition that target STEM education. But before that can happen, there are countless challenges to making changes to curricula in both formal and informal learning environments, e.g. cost, time, and government standards. Therefore, education practices, tools, and curricular requirements must be considered together when developing ways to systematically incorporate spatial skills into education. Determining the best approach to meeting these challenges will require feedback from all stakeholders in order to understand their needs and include them in the design and implementation of new tools and practices. Early phases of moving our work towards educational practice have included gathering feedback and information from educators about the challenges and opportunities associated with targeting spatial skills and incorporating new technologies in educational practices. Over the course of this project, we will host workshops and interview stakeholders within education systems. Our findings from these workshops and interviews will be incorporated into the design framework so that it provides a way to discuss not only the relationship between body, space, and action, but also a particular design’s utility for education.

As we continue to refine our designs and the framework, we aim to identify opportunities to work directly with educators and students to design, develop, implement, and test a system that functions, from a holistic/systemic point of view, to support, alter, and improve spatial cognition in high-priority populations.

References

  • Ambinder, M. S., Wang, R. F., Crowell, J. A., Francis, G. K., & Brinkmann, P. (2009). Human four-dimensional spatial intuition in virtual reality. Psychonomic Bulletin & Review, 16(5), 818–823. doi:10.3758/PBR.16.5.818

  • Antle, A. N., & Wang, S. (2013). Comparing motor-cognitive strategies for spatial problem solving with tangible and multi-touch interfaces. In Proceedings of the 7th International Conference on Tangible, Embedded and Embodied Interaction (pp. 65–72). New York: ACM. doi:10.1145/2460625.2460635

  • Avraamides, M. N., Loomis, J. M., Klatzky, R. L., & Golledge, R. G. (2004). Functional equivalence of spatial representations derived from vision and language: evidence from allocentric judgments. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(4), 801–814. doi:10.1037/0278-7393.30.4.804

  • Benedikt, M. (Ed.). (1991). Cyberspace: first steps. Cambridge: MIT Press.

  • Bhalla, M., & Proffitt, D. R. (1999). Visual–motor recalibration in geographical slant perception. Journal of Experimental Psychology: Human Perception and Performance, 25(4), 1076–1096. doi:10.1037/0096-1523.25.4.1076

  • Boroditsky, L. (2000). Metaphoric structuring: understanding time through spatial metaphors. Cognition, 75(1), 1–28. doi:10.1016/S0010-0277(99)00073-6

  • Chandrasekharan, S., Athreya, D., & Srinivasan, N. (2006). Twists and Oliver Twists in mental rotation: complementary actions as orphan processes. In Proceedings of the 28th Annual Conference of the Cognitive Science Society (pp. 1092–1097). Vancouver: Sheridan.

  • Cooper, S., Khatib, F., Treuille, A., Barbero, J., Lee, J., Beenen, M., …Players, F. (2010). Predicting protein structures with a multiplayer online game. Nature, 466(7307), 756–760. doi:10.1038/nature09304

  • Darken, R. P., & Peterson, B. (2001). Spatial orientation, wayfinding, and representation. In K. M. Stanney (Ed.), Handbook of virtual environments: design, implementation, and applications (pp. 493–518). Mahwah: Erlbaum.

  • Fischer, P. T., Hornecker, E., & Zoellner, C. (2013). SMSlingshot: an expert amateur DIY case study. In Proceedings of the 7th International Conference on Tangible, Embedded and Embodied Interaction (pp. 9–16). New York: ACM. doi:10.1145/2460625.2460627

  • Gibson, J. J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin.

  • Golledge, R. G. (1999). Wayfinding behavior: cognitive mapping and other spatial processes. Baltimore: JHU Press.

  • Hegarty, M., Montello, D. R., Richardson, A. E., Ishikawa, T., & Lovelace, K. (2006). Spatial abilities at different scales: Individual differences in aptitude-test performance and spatial-layout learning. Intelligence, 34(2), 151–176. doi:10.1016/j.intell.2005.09.005

  • Hegarty, M., & Waller, D. (2004). A dissociation between mental rotation and perspective-taking spatial abilities. Intelligence, 32(2), 175–191. doi:10.1016/j.intell.2003.12.001

  • Hommel, B., Müsseler, J., Aschersleben, G., & Prinz, W. (2001). The Theory of Event Coding (TEC): A framework for perception and action planning. Behavioral and Brain Sciences, 24(5), 849–878. doi:10.1017/S0140525X01000103

  • Ishii, H., Wisneski, C., Orbanes, J., Chun, B., & Paradiso, J. (1999). PingPongPlus: design of an athletic-tangible interface for computer-supported cooperative play. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 394–401). New York: ACM. doi:10.1145/302979.303115

  • Kärcher, S. M., Fenzlaff, S., Hartmann, D., Nagel, S. K., & König, P. (2012). Sensory augmentation for the blind. Frontiers in Human Neuroscience, 6, 37. doi:10.3389/fnhum.2012.00037

  • Kontra, C., Lyons, D. J., Fischer, S. M., & Beilock, S. L. (2015). Physical experience enhances science learning. Psychological Science, 26(6), 737–749.

  • Kortemeyer, G., Tan, P., & Schirra, S. (2013). A slower speed of light: developing intuition about special relativity with games. In Proceedings of the International Conference on the Foundations of Digital Games (FDG’13) (pp. 400–402). New York: ACM. http://www.fdg2013.org/program/festival/openrelativity.pdf.

  • Kozhevnikov, M., & Hegarty, M. (2001). A dissociation between object manipulation spatial ability and spatial orientation ability. Memory & Cognition, 29(5), 745–756. doi:10.3758/BF03200477

  • Liben, L. S., & Downs, R. M. (1993). Understanding person-space-map relations: Cartographic and developmental perspectives. Developmental Psychology, 29(4), 739–752. doi:10.1037/0012-1649.29.4.739

  • Lohman, D. F. (1979). Spatial Ability: A Review and Reanalysis of the Correlational Literature. DTIC document. http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA075972

  • Lozano, S. C., Hard, B. M., & Tversky, B. (2007). Putting action in perspective. Cognition, 103(3), 480–490. doi:10.1016/j.cognition.2006.04.010

  • Lubinski, D., & Benbow, C. P. (2006). Study of mathematically precocious youth after 35 years: uncovering antecedents for the development of math-science expertise. Perspectives on Psychological Science, 1(4), 316–345. doi:10.1111/j.1745-6916.2006.00019.x

  • May, M. (2004). Imaginal perspective switches in remembered environments: Transformation versus interference accounts. Cognitive Psychology, 48(2), 163–206. doi:10.1016/S0010-0285(03)00127-0

  • Mazalek, A., Chandrasekharan, S., Nitsche, M., Welsh, T., Clifton, P., Quitmeyer, A., …Athreya, D. (2011). I’m in the game: embodied puppet interface improves avatar control. In Proceedings of the fifth international conference on Tangible, embedded, and embodied interaction (pp. 129–136). New York: ACM. doi:10.1145/1935701.1935727

  • Mazalek, A., Chandrasekharan, S., Nitsche, M., Welsh, T., Thomas, G., Sanka, T., …Clifton, P. (2009). Giving your self to the game: transferring a player’s own movements to avatars using tangible interfaces. In Proceedings of the 2009 ACM SIGGRAPH Symposium on Video Games (pp. 161–168). New York: ACM. doi:10.1145/1581073.1581098

  • Mazalek, A., Nitsche, M., Chandrasekharan, S., Welsh, T., Clifton, P., Quitmeyer, A., …Kirschner, F. (2010). Recognizing self in puppet controlled virtual avatars. In Proceedings of the 3rd International Conference on Fun and Games (pp. 66–73). New York: ACM. doi:10.1145/1823818.1823825

  • Montello, D. R. (1993). Scale and multiple psychologies of space. In A. U. Frank & I. Campari (Eds.), Spatial Information Theory: A Theoretical Basis for GIS (pp. 312–321). Berlin Heidelberg: Springer. doi:10.1007/3-540-57207-4_21

  • Morsella, E., & Krauss, R. M. (2004). The role of gestures in spatial working memory and speech. American Journal of Psychology, 117(3), 411–424. doi:10.2307/4149008

  • Mou, W., McNamara, T. P., Valiquette, C. M., & Rump, B. (2004). Allocentric and egocentric updating of spatial memories. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(1), 142–157. doi:10.1037/0278-7393.30.1.142.

  • Murray, J. H. (1997). Hamlet on the Holodeck: The Future of Narrative in Cyberspace. New York: The Free Press.

  • Newcombe, N. S., & Shipley, T. F. (2015). Thinking about spatial thinking: new typology, new assessments. In J. S. Gero (Ed.), Studying Visual and Spatial Reasoning for Design Creativity (pp. 179–192). Netherlands: Springer. doi:10.1007/978-94-017-9297-4_10.

  • Nitsche, M. (2008). Video game spaces: image, play, and structure in 3D game worlds. Cambridge: MIT Press.

  • Portugali, J. (1996). The Construction of Cognitive Maps. Dordrecht: Springer Science & Business Media.

  • Poynor, R. (1995). The hand that rocks the cradle: Gillian Crampton Smith is making the Royal College of Art’s Computer Related Design program a multimedia powerhouse. ID (New York: Design Publications), 42, 60.

  • President’s Council of Advisors on Science and Technology. (2012). Report to the President: Engage to Excel: Producing One Million Additional College Graduates with Degrees in Science, Technology, Engineering, and Mathematics. Washington, DC: PCAST. https://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-engage-to-excel-final_2-25-12.pdf.

  • Proffitt, D. R. (2013). An embodied approach to perception: by what units are visual perceptions scaled? Perspectives on Psychological Science, 8(4), 474–483. doi:10.1177/1745691613489837.

  • Quarles, J., Lampotang, S., Fischler, I., Fishwick, P., & Lok, B. (2008). Tangible user interfaces compensate for low spatial cognition. In IEEE Symposium on 3D User Interfaces, 2008. 3DUI 2008 (pp. 11–18). doi:10.1109/3DUI.2008.4476585.

  • Raffle, H. S., Parkes, A. J., & Ishii, H. (2004). Topobo: a constructive assembly system with kinetic memory. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 647–654). New York: ACM. doi:10.1145/985692.985774.

  • Resnick, I., & Shipley, T. F. (2013). Breaking new ground in the mind: an initial study of mental brittle transformation and mental rigid rotation in science experts. Cognitive Processing, 14(2), 143–152. doi:10.1007/s10339-013-0548-2.

  • Riener, A., & Ferscha, A. (2008). Raising awareness about space via vibro-tactile notifications. In D. Roggen, C. Lombriser, G. Tröster, G. Kortuem, & P. Havinga (Eds.), Smart Sensing and Context (pp. 235–245). Berlin Heidelberg: Springer. doi:10.1007/978-3-540-88793-5_18.

  • ServiceOntario Publications. (2014). Paying Attention to Spatial Reasoning, K-12: Support Document for Paying Attention to Mathematics Education. http://www.edu.gov.on.ca/eng/literacynumeracy/LNSPayingAttention.pdf. Accessed 27 July 2016.

  • Shea, D. L., Lubinski, D., & Benbow, C. P. (2001). Importance of assessing spatial ability in intellectually talented young adolescents: A 20-year longitudinal study. Journal of Educational Psychology, 93(3), 604–614. doi:10.1037/0022-0663.93.3.604.

  • Super, D. E., & Bachrach, P. B. (1957). Scientific careers and vocational development theory: A review, a critique and some recommendations (Vol. xii). Oxford: Columbia Univer.

  • Taylor, H. A., & Tversky, B. (1992). Spatial mental models derived from survey and route descriptions. Journal of Memory and Language, 31(2), 261–292. doi:10.1016/0749-596X(92)90014-O.

  • Taylor, J. E. T., Witt, J. K., & Sugovic, M. (2011). When walls are no longer barriers: perception of wall height in parkour. Perception, 40(6), 757–760. doi:10.1068/p6855.

  • The Macula - The 600 years, Astronomical Clock Tower, Prague. (n.d.). http://www.initi.org/workss/600-years/. Accessed 25 Mar 2016.

  • The Mystery Spot Official Website. (n.d.). http://www.mysteryspot.com/. Accessed 18 Dec 2012.

  • Tversky, B. (2000). Some ways that maps and diagrams communicate. In C. Freksa, C. Habel, W. Brauer, & K. F. Wender (Eds.), Spatial Cognition II (pp. 72–79). Berlin Heidelberg: Springer. doi:10.1007/3-540-45460-8_6.

  • Tversky, B., & Hard, B. M. (2009). Embodied and disembodied cognition: Spatial perspective-taking. Cognition, 110(1), 124–129. doi:10.1016/j.cognition.2008.10.008.

  • Underkoffler, J., & Ishii, H. (1999). Urp: a luminous-tangible workbench for urban planning and design. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems (pp. 386–393). New York: ACM. doi:10.1145/302979.303114.

  • Uttal, D. H., Meadow, N. G., Tipton, E., Hand, L. L., Alden, A. R., Warren, C., …Newcombe, N. S. (2013). The malleability of spatial skills: a meta-analysis of training studies. Psychological Bulletin, 139(2), 352–402. doi:10.1037/a0028446

  • van Dijk, J., van der Lugt, R., & Hummels, C. (2013). Beyond distributed representation: embodied cognition design supporting socio-sensorimotor couplings. In Proceedings of the 8th International Conference on Tangible, Embedded and Embodied Interaction (pp. 181–188). New York: ACM. doi:10.1145/2540930.2540934.

  • Wai, J., Lubinski, D., & Benbow, C. P. (2009). Spatial ability for STEM domains: Aligning over 50 years of cumulative psychological knowledge solidifies its importance. Journal of Educational Psychology, 101(4), 817–835. doi:10.1037/a0016127.

  • Webb, R. M., Lubinski, D., & Benbow, C. P. (2007). Spatial ability: A neglected dimension in talent searches for intellectually precocious youth. Journal of Educational Psychology, 99(2), 397–420. doi:10.1037/0022-0663.99.2.397.

  • Witt, J. K., & Proffitt, D. R. (2005). See the ball, hit the ball: apparent ball size is correlated with batting average. Psychological Science, 16(12), 937–938. doi:10.1111/j.1467-9280.2005.01640.x.

  • Witt, J. K., Proffitt, D. R., & Epstein, W. (2005). Tool use affects perceived distance, but only when you intend to use it. Journal of Experimental Psychology: Human Perception and Performance, 31(5), 880–888. doi:10.1037/0096-1523.31.5.880.

  • Wohlschläger, A. (2001). Mental object rotation and the planning of hand movements. Perception & Psychophysics, 63(4), 709–718. doi:10.3758/BF03194431.

  • Wu, A., Yim, J.-B., Caspary, E., Mazalek, A., Chandrasekharan, S., & Nersessian, N. J. (2011). Kinesthetic pathways: a tabletop visualization to support discovery in systems biology. In Proceedings of the 8th ACM conference on Creativity and cognition (pp. 21–30). New York: ACM. doi:10.1145/2069618.2069624.

Acknowledgements

The research described here was funded in part by U.S. National Science Foundation grant #0757370, the Social Sciences and Humanities Research Council of Canada Insight Grants program, the Canada Foundation for Innovation, the Ministry of Research and Innovation of Ontario, and the Alberta Ingenuity Fund.

Authors’ contributions

All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Author information

Corresponding author

Correspondence to Ali Mazalek.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Clifton, P.G., Chang, J.SK., Yeboah, G. et al. Design of embodied interfaces for engaging spatial cognition. Cogn. Research 1, 24 (2016). https://doi.org/10.1186/s41235-016-0032-5

