We provide an analysis of pairs of children interacting with a multi-touch tabletop exhibit designed to help museum visitors learn about evolution and the tree of life. The exhibit’s aim is to inspire visitors with a sense of wonder at life’s diversity while providing insight into key evolutionary concepts such as common descent. We find that children negotiate their interaction with the exhibit in a variety of ways including reactive, articulated, and contemplated exploration. These strategies in turn influence the ways in which children make meaning through their experiences. We consider how specific aspects of the exhibit design shape these collaborative exploration and meaning-making activities.
In this paper we describe visitor interaction with an interactive tabletop exhibit on evolution that we designed for use in natural history museums. We video recorded 30 families using the exhibit at the Harvard Museum of Natural History. We also observed an additional 50 social groups interacting with the exhibit without video recording. The goal of this research is to explore ways to develop “successful” interactive tabletop exhibits for museums. To determine criteria for success in this context, we borrow the concept of Active Prolonged Engagement (APE) from the science museum literature. Research on APE sets a high standard for visitor engagement and learning, and it offers a number of useful concepts and measures for research on interactive surfaces in the wild. In this paper we adapt and expand on these measures and apply them to our tabletop exhibit. Our results show that visitor groups collaborated effectively and engaged in focused, on-topic discussion for prolonged periods of time. To understand these results, we analyze visitor conversation at the exhibit. Our analysis suggests that social practices of game play contributed substantially to visitor collaboration and engagement with the exhibit.
Multi-touch technology lends itself to collaborative crowd interaction (CI). However, common tap-operated widgets are impractical for CI, since they are susceptible to accidental touches and interference from other users. We present a novel multi-touch interface called FlowBlocks in which every UI action is invoked through a small sequence of user actions: dragging parametric UI-Blocks, and dropping them over operational UI-Docks. The FlowBlocks approach is advantageous for CI because it a) makes accidental touches inconsequential; and b) introduces design parameters for mutual awareness, concurrent input, and conflict management. FlowBlocks was successfully used on the floor of a busy natural history museum. We present the complete design space and describe a year-long iterative design and evaluation process which employed the Rapid Iterative Test and Evaluation (RITE) method in a museum setting.
In this paper, we present the DeepTree exhibit, a multi-user, multi-touch interactive visualization of the Tree of Life. We developed DeepTree to facilitate collaborative learning of evolutionary concepts. We describe an iterative process in which a team of computer scientists, learning scientists, biologists, and museum curators worked together throughout design, development, and evaluation. We present the importance of designing the interactions and the visualization hand-in-hand in order to facilitate active learning. The outcome of this process is a fractal-based tree layout that reduces visual complexity while capturing all life on Earth; a custom rendering and navigation engine that prioritizes visual appeal and smooth fly-through; and a multi-user interface that encourages collaborative exploration while offering guided discovery. We present an evaluation showing that the large dataset encourages free exploration, triggers emotional responses, and facilitates visitor engagement and informal learning.
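The DeepTree abstract mentions a fractal-based tree layout. The exhibit's actual algorithm is not reproduced here, but the core idea behind such layouts, in which each subtree is drawn as a scaled-down copy of its parent so the total extent stays bounded (a geometric series) no matter how deep the tree grows, can be sketched as follows. All names and parameters below are illustrative, not taken from the published system:

```python
import math

def fractal_layout(tree, x=0.0, y=0.0, angle=90.0, length=1.0,
                   scale=0.5, spread=50.0, positions=None):
    """Recursively place a nested-tuple tree (name, children).

    Each child branch is a scaled-down copy of its parent, so the
    drawing's total extent is bounded regardless of tree depth.
    """
    if positions is None:
        positions = {}
    name, children = tree
    positions[name] = (x, y)
    n = len(children)
    for i, child in enumerate(children):
        # Fan the children symmetrically around the parent's heading.
        a = angle + spread * (i - (n - 1) / 2)
        cx = x + length * math.cos(math.radians(a))
        cy = y + length * math.sin(math.radians(a))
        fractal_layout(child, cx, cy, a, length * scale, scale,
                       spread, positions)
    return positions

# Toy taxonomy, purely for illustration.
tree = ("life", [("bacteria", []),
                 ("eukaryotes", [("animals", []), ("plants", [])])])
pos = fractal_layout(tree)
```

Because each level's branch length shrinks by `scale`, zooming into any subtree reveals the same structure at a finer granularity, which is the property that lets a fixed screen hold an arbitrarily deep hierarchy.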
Cerebral palsy is a non-progressive neurological disorder caused by disturbances to the developing brain. Physical and occupational therapy, if started at a young age, can help minimize complications such as joint contractures, and can improve limb range of motion and coordination. While current forms of therapy for children with cerebral palsy are effective in minimizing symptoms, many children find them boring or repetitive. We have designed a system for use in upper-extremity rehabilitation sessions, making use of a multitouch display. The system allows children to be engaged in interactive gaming scenarios, while intensively performing desired exercises. It supports games which require completion of specific stretching or coordination exercises using one or both hands, as well as games which use physical, or “tangible” input mechanisms. To encourage correct posture during therapeutic exercises, we use a wireless kinematic sensor, worn on the patient's trunk, as a feedback channel for the games. The system went through several phases of design, incorporating input from observations of therapy and clinical sessions, as well as feedback from medical professionals. This paper describes the hardware platform, presents the design objectives derived from our iterative design phases and meetings with clinical personnel, discusses our current game designs, and identifies areas of future work.
In this paper we present a novel twist on the classic children’s game, Memory. Here we combine the use of a weighted, centroidal Voronoi diagram and a multitouch tabletop surface to create a board game in which tiles (represented with Voronoi regions) dynamically morph as the game play evolves. This provides a challenge in which players must not only remember the locations of the various tiles, but also track their movements over time.
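The Memory-game abstract above rests on weighted, centroidal Voronoi diagrams, in which each region's generating site sits at the region's own centroid and the weights control relative region size. One standard way to approximate such a diagram is Lloyd-style relaxation over a discrete grid; the sketch below uses the power (additively weighted) distance and is illustrative only, not the paper's implementation:

```python
def weighted_lloyd(sites, weights, width, height, iterations=5):
    """Approximate a weighted, centroidal Voronoi diagram.

    Each grid cell is assigned to the site minimizing the power
    distance (squared distance minus squared weight); each site then
    moves to the centroid of its region, and the process repeats.
    """
    for _ in range(iterations):
        sums = [[0.0, 0.0, 0] for _ in sites]  # x-sum, y-sum, cell count
        for gx in range(width):
            for gy in range(height):
                best = min(range(len(sites)),
                           key=lambda i: (gx - sites[i][0]) ** 2
                                       + (gy - sites[i][1]) ** 2
                                       - weights[i] ** 2)
                sums[best][0] += gx
                sums[best][1] += gy
                sums[best][2] += 1
        # Move each site to its region's centroid (keep it if empty).
        sites = [(sx / n, sy / n) if n else sites[i]
                 for i, (sx, sy, n) in enumerate(sums)]
    return sites

# Four game tiles on a 40x40 board; the heavier tile claims more area.
tiles = weighted_lloyd([(5, 5), (30, 8), (10, 30), (32, 32)],
                       [1.0, 4.0, 2.0, 1.0], 40, 40)
```

Animating the weights over successive rounds is what would make the tiles "morph" as described: as a weight grows or shrinks, the relaxation smoothly redraws every neighboring region.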
Introduced in 2005, the Voronoi treemap algorithm is an information visualization technique for displaying hierarchical data. Voronoi treemaps use weighted, centroidal Voronoi diagrams to create a nested tessellation of convex polygons. However, despite appealing qualities, few real-world examples of Voronoi treemaps exist. In this paper, we present a multi-touch tabletop application called Involv that uses the Voronoi treemap algorithm to create an interactive visualization for the Encyclopedia of Life. Involv is the result of a yearlong iterative development process and includes over 1.2 million named species organized in a nine-level hierarchy. Working in the domain of life sciences, we have encountered the need to display supplemental hierarchical data to augment information in the treemap. Thus we propose an extension of the Voronoi treemap algorithm that employs force-directed graph drawing techniques both to guide the construction of the treemap and to overlay a supplemental hierarchy.
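The Involv abstract's proposed extension relies on force-directed graph drawing. The classic spring-embedder idea (all node pairs repel, linked nodes attract, moves are damped until the layout settles) can be sketched minimally as below; the parameters and capping rule are illustrative assumptions, not Involv's actual method:

```python
import math
import random

def force_layout(nodes, edges, steps=200, k=1.0, max_step=0.05):
    """Minimal spring-embedder: pairwise repulsion, spring attraction
    along edges, and a per-step displacement cap for stability."""
    random.seed(42)  # deterministic toy initialization
    pos = {n: [random.uniform(-1, 1), random.uniform(-1, 1)]
           for n in nodes}
    for _ in range(steps):
        disp = {n: [0.0, 0.0] for n in nodes}
        for a in nodes:                      # pairwise repulsion
            for b in nodes:
                if a == b:
                    continue
                dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[a][0] += dx / d * f
                disp[a][1] += dy / d * f
        for a, b in edges:                   # spring attraction
            dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            for n, s in ((a, -1), (b, 1)):
                disp[n][0] += s * dx / d * f
                disp[n][1] += s * dy / d * f
        for n in nodes:                      # capped, damped update
            m = math.hypot(*disp[n]) or 1e-9
            step = min(m, max_step)
            pos[n][0] += disp[n][0] / m * step
            pos[n][1] += disp[n][1] / m * step
    return pos

layout = force_layout(["a", "b", "c", "d"],
                      [("a", "b"), ("b", "c"), ("c", "d")])
```

In a treemap setting, forces like these can nudge sibling regions toward related regions in a supplemental hierarchy while the tessellation itself constrains their boundaries.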
We present CThru, a self-guided video-based educational environment in a large multi-display setting. We employ a video-centered approach, creating and combining multimedia content of different formats with a storytelling educational video. With the support of new display form factors in the environment, viewing a sequential educational video thread is replaced by the immersive learning experience of hands-on exploration and manipulation in a multi-dimensional information space. We demonstrate CThru with an animation clip in cellular biology, supplementing visible objects in the video with rich domain-specific multimedia information and interactive 3D models. We describe CThru's design rationale and implementation. We also discuss a pilot study and what it revealed with respect to CThru's interface and the usage pattern of the tabletop and the associated large wall display.
We present WeSpace – a collaborative work space that integrates a large data wall with a multi-user multi-touch table. WeSpace has been developed for a population of scientists who frequently meet in small groups for data exploration and visualization. It provides a low-overhead walk-up-and-share environment for users with their own personal applications and laptops. We present our year-long effort from initial ethnographic studies, to iterations of design, development and user testing, to the current experiences of these scientists carrying out their collaborative research in the WeSpace. We shed light on the utility and value of the multi-touch table, its usage patterns, and the changes in workflow that WeSpace has brought about.
Many research projects have demonstrated the benefits of bimanual interaction for a variety of tasks. When choosing bimanual input, system designers must select the input device that each hand will control. In this paper, we argue for the use of pen and touch two-handed input, and describe an experiment in which users were faster and committed fewer errors using pen and touch input in comparison to using either touch and touch or pen and pen input while performing a representative bimanual task. We present design principles and an application in which we applied our design rationale toward the creation of a learnable set of bimanual, pen and touch input commands.
The interoperability of disparate data types and sources has been a long-standing problem and a hindering factor for the efficacy and efficiency of visual exploration applications. In this paper, we present a solution, called LivOlay, which enables the rapid visual overlay of live data rendered in different applications. Our tool addresses datasets in which visual registration of the information is necessary in order to allow for thorough understanding and visual analysis. We also discuss initial evaluation and user feedback of LivOlay.
The WeSpace is a long-term project dedicated to the creation of environments supporting walk-up-and-share collaboration among small groups. The focus of our system design has been to provide 1) groups with mechanisms to easily share their own data and 2) necessary native visual applications suitable for large display environments. Our current prototype system includes both a large high-resolution data wall and an interactive table. These are utilized to provide a focal point for collaborative interaction with data and applications. In this paper, we describe in detail the designs behind the current prototype system. In particular, we present 1) the infrastructure which allows users to connect and visually share their laptop content on-the-fly, and supports the extension of native visualization applications, and 2) the table-centric design employed in customized WeSpace applications to support cross-surface interactions. We also describe elements of our user-centered iterative design process, in particular the results from a late-stage session which saw our astrophysicist participants successfully use the WeSpace to collaborate on their own real research problems.
We investigate the differences – in terms of both quantitative performance and subjective preference – between direct-touch and mouse input for unimanual and bimanual tasks on tabletop displays. The results of two experiments show that for bimanual tasks performed on tabletops, users benefit from direct-touch input. However, our results also indicate that mouse input may be more appropriate for a single user working on tabletop tasks requiring only single-point interaction.
Co-located collaborators often work over physical tabletops using combinations of expressive hand gestures and verbal utterances. This paper provides the first observations of how pairs of people communicated and interacted in a multimodal digital table environment built atop existing single user applications. We contribute to the understanding of these environments in two ways. First, we saw that speech and gesture commands served double duty as both commands to the computer, and as implicit communication to others. Second, in spite of limitations imposed by the underlying single-user application, people were able to work together simultaneously, and they performed interleaving acts: the graceful mixing of inter-person speech and gesture actions as commands to the system. This work contributes to the intricate understanding of multi-user multimodal digital table interaction.
Touch is a compelling input modality for interactive devices; however, touch input on the small screen of a mobile device is problematic because a user’s fingers occlude the graphical elements he wishes to work with. In this paper, we present LucidTouch, a mobile device that addresses this limitation by allowing the user to control the application by touching the back of the device. The key to making this usable is what we call pseudo-transparency: by overlaying an image of the user’s hands onto the screen, we create the illusion of the mobile device itself being semitransparent. This pseudo-transparency allows users to accurately acquire targets while not occluding the screen with their fingers and hand. LucidTouch also supports multi-touch input, allowing users to operate the device simultaneously with all 10 fingers. We present initial study results that indicate that many users found touching on the back to be preferable to touching on the front, due to reduced occlusion, higher precision, and the ability to make multi-finger input.
Information shown on a tabletop display can appear distorted when viewed by a seated user. Even worse, the impact of this distortion is different depending on the location of the information on the display. In this paper, we examine how this distortion affects the perception of the basic graphical elements of information visualization shown on displays at various angles. We first examine perception of these elements on a single display, and then compare this to perception across displays, in order to evaluate the effectiveness of various elements for use in a tabletop and multi-display environment. We found that the perception of some graphical elements is more robust to distortion than others. We then develop recommendations for building data visualizations for these environments.
Freehand gestural interaction with direct-touch computation surfaces has been the focus of significant research activity recently. While many interesting gestural interaction techniques have been proposed, their design has been mostly ad-hoc and has not been presented within a constructive design framework. In this paper, we develop and articulate a set of design principles for constructing – in a systematic and extensible manner – multi-hand gestures on touch surfaces that can sense multiple points and shapes, and can also accommodate conventional point-based input. To illustrate the generality of these design principles, a set of bimanual continuous gestures that embody these principles are developed and explored within a prototype tabletop publishing application. We carried out a user evaluation to assess the usability of these gestures and use the results and observations to suggest future design guidelines.