Cerebral palsy is a non-progressive neurological disorder caused by disturbances to the developing brain. Physical and occupational therapy, if started at a young age, can help minimize complications such as joint contractures, and can improve limb range of motion and coordination. While current forms of therapy for children with cerebral palsy are effective in minimizing symptoms, many children find them boring or repetitive. We have designed a system for use in upper-extremity rehabilitation sessions, built around a multitouch display. The system engages children in interactive gaming scenarios while they intensively perform the desired exercises. It supports games that require completing specific stretching or coordination exercises using one or both hands, as well as games that use physical, or “tangible,” input mechanisms. To encourage correct posture during therapeutic exercises, we use a wireless kinematic sensor, worn on the patient's trunk, as a feedback channel for the games. The system went through several phases of design, incorporating input from observations of therapy and clinical sessions, as well as feedback from medical professionals. This paper describes the hardware platform, presents the design objectives derived from our iterative design phases and meetings with clinical personnel, discusses our current game designs, and identifies areas of future work.
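As a rough illustration of how a trunk-worn kinematic sensor could drive game feedback, the sketch below derives a tilt angle from accelerometer readings and maps it to a penalty signal. The sensor interface, the 20-degree threshold, and the linear penalty shape are illustrative assumptions, not the system's actual implementation.

```python
import math

# Illustrative sketch: derive a trunk-tilt angle from a wearable
# accelerometer and map it to a feedback signal a game could use.
# The threshold and penalty curve are assumptions for illustration.

def trunk_tilt_deg(ax, ay, az):
    """Angle between the sensor's vertical axis and gravity, in degrees."""
    g = math.sqrt(ax * ax + ay * ay + az * az)
    if g == 0:
        return 0.0
    return math.degrees(math.acos(max(-1.0, min(1.0, az / g))))

def posture_feedback(ax, ay, az, slouch_threshold_deg=20.0):
    """Return a 0..1 penalty the game can use to encourage upright posture."""
    tilt = trunk_tilt_deg(ax, ay, az)
    if tilt <= slouch_threshold_deg:
        return 0.0
    # Penalty grows linearly past the threshold, capped at 1.
    return min(1.0, (tilt - slouch_threshold_deg) / 45.0)

# Example: a forward lean of roughly 30 degrees yields a mild penalty.
print(posture_feedback(0.0, 0.5, 0.87))
```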
In this paper we present a novel twist on the classic children’s game, Memory. We combine a weighted, centroidal Voronoi diagram with a multitouch tabletop surface to create a board game in which tiles (represented as Voronoi regions) dynamically morph as gameplay evolves. This poses a challenge in which players must not only remember the locations of the various tiles, but also track their movements over time.
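To make the underlying computation concrete, the following is a minimal sketch of discrete Lloyd relaxation for a weighted, centroidal Voronoi diagram, evaluated on a pixel grid. The multiplicative weighting scheme, grid resolution, and iteration count are illustrative assumptions rather than the game's exact formulation.

```python
import numpy as np

# Illustrative sketch: discrete Lloyd relaxation for a weighted,
# centroidal Voronoi diagram on the unit square, evaluated on a grid.
# Larger weights produce larger cells under the weighted distance.

def relax(sites, weights, iters=20, res=256):
    ys, xs = np.mgrid[0:res, 0:res] / res            # grid of sample points
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
    for _ in range(iters):
        # Multiplicatively weighted distance: larger weight -> larger cell.
        d = np.linalg.norm(pts[:, None, :] - sites[None, :, :], axis=2)
        label = np.argmin(d / weights[None, :], axis=1)
        for i in range(len(sites)):                  # move sites to centroids
            cell = pts[label == i]
            if len(cell):
                sites[i] = cell.mean(axis=0)
    return sites, label.reshape(res, res)

rng = np.random.default_rng(1)
sites = rng.random((8, 2))
weights = rng.uniform(0.5, 2.0, 8)
sites, cells = relax(sites, weights)
print(np.bincount(cells.ravel()) / cells.size)       # relative cell areas
```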
Introduced in 2005, the Voronoi treemap algorithm is an information visualization technique for displaying hierarchical data. Voronoi treemaps use weighted, centroidal Voronoi diagrams to create a nested tessellation of convex polygons. However, despite their appealing qualities, few real-world examples of Voronoi treemaps exist. In this paper, we present a multi-touch tabletop application called Involv that uses the Voronoi treemap algorithm to create an interactive visualization for the Encyclopedia of Life. Involv is the result of a yearlong iterative development process and includes over 1.2 million named species organized in a nine-level hierarchy. Working in the domain of life sciences, we have encountered the need to display supplemental hierarchical data to augment information in the treemap. We therefore propose an extension of the Voronoi treemap algorithm that employs force-directed graph drawing techniques both to guide the construction of the treemap and to overlay a supplemental hierarchy.
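At the heart of a Voronoi treemap is a loop that adjusts site weights until each cell's area matches the share demanded by its data value. The sketch below shows one damped multiplicative update of that kind, reusing the grid-labeling idea from the sketch above; the update rule and damping factor are assumptions for illustration, not the published algorithm's exact formulation.

```python
import numpy as np

# Illustrative sketch of the weight-adjustment step in a Voronoi
# treemap: scale each site's weight by the ratio of its target area
# share to its current share, so cells grow or shrink toward the
# sizes their data values demand. The damping factor is assumed.

def adjust_weights(weights, cell_areas, target_shares, damping=0.5):
    shares = cell_areas / cell_areas.sum()
    ratio = target_shares / np.maximum(shares, 1e-9)
    # Damped multiplicative update keeps the iteration stable.
    return weights * ratio ** damping

weights = np.ones(3)
cell_areas = np.array([10.0, 30.0, 60.0])   # measured from the labeling
target = np.array([0.2, 0.3, 0.5])          # shares demanded by the data
print(adjust_weights(weights, cell_areas, target))
```

In the full algorithm, an update like this alternates with centroidal relaxation, and the procedure recurses into each cell to lay out the next level of the hierarchy.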
We present CThru, a self-guided, video-based educational environment in a large multi-display setting. We take a video-centered approach, creating and combining multimedia content of different formats with a narrative educational video. With the support of new display form factors in the environment, passively viewing a sequential educational video is replaced by the immersive learning experience of hands-on exploration and manipulation in a multi-dimensional information space. We demonstrate CThru with an animation clip in cellular biology, supplementing visible objects in the video with rich domain-specific multimedia information and interactive 3D models. We describe CThru's design rationale and implementation. We also discuss a pilot study and what it revealed about CThru's interface and the usage pattern of the tabletop and the associated large wall display.
We present WeSpace – a collaborative workspace that integrates a large data wall with a multi-user multi-touch table. WeSpace has been developed for a population of scientists who frequently meet in small groups for data exploration and visualization. It provides a low-overhead walk-up-and-share environment for users with their own personal applications and laptops. We present our year-long effort, from initial ethnographic studies, through iterations of design, development and user testing, to the current experiences of these scientists carrying out their collaborative research in the WeSpace. We shed light on its utility, the value of the multi-touch table, its usage patterns, and the changes in workflow that WeSpace has brought about.
Many research projects have demonstrated the benefits of bimanual interaction for a variety of tasks. When choosing bimanual input, system designers must select the input device that each hand will control. In this paper, we argue for two-handed pen-and-touch input, and describe an experiment in which users were faster and committed fewer errors with pen-and-touch input than with either touch-and-touch or pen-and-pen input while performing a representative bimanual task. We present design principles and an application in which we applied our design rationale to create a learnable set of bimanual pen-and-touch commands.
The interoperability of disparate data types and sources has been a long-standing problem and a factor hindering the efficacy and efficiency of visual exploration applications. In this paper, we present a solution, called LivOlay, which enables the rapid visual overlay of live data rendered in different applications. Our tool addresses datasets in which visual registration of the information is necessary to allow thorough understanding and visual analysis. We also discuss an initial evaluation and user feedback on LivOlay.
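The abstract does not prescribe a registration method; a common baseline for visually registering two layers is a least-squares affine fit to corresponding control points, sketched below. The function and the example points are hypothetical, not LivOlay's actual mechanism.

```python
import numpy as np

# Illustrative sketch: least-squares affine registration between two
# visual layers from corresponding control points. A common baseline
# technique, not necessarily LivOlay's actual method.

def fit_affine(src, dst):
    """Solve dst ~= A @ [x, y, 1] for a 2x3 affine matrix A."""
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])                  # (n, 3)
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T                                  # (2, 3)

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = np.array([[10.0, 5.0], [12.0, 5.0], [10.0, 8.0], [12.0, 8.0]])
A = fit_affine(src, dst)
print(A @ np.array([0.5, 0.5, 1.0]))            # -> [11.  6.5]
```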
The WeSpace is a long-term project dedicated to the creation of environments supporting walk-up-and-share collaboration among small groups. The focus of our system design has been to provide 1) mechanisms for groups to easily share their own data and 2) the necessary native visual applications suited to large display environments. Our current prototype system includes both a large high-resolution data wall and an interactive table. These are utilized to provide a focal point for collaborative interaction with data and applications. In this paper, we describe in detail the designs behind the current prototype system. In particular, we present 1) the infrastructure that allows users to connect and visually share their laptop content on-the-fly, and supports the extension of native visualization applications, and 2) the table-centric design employed in customized WeSpace applications to support cross-surface interactions. We also describe elements of our user-centered iterative design process, in particular the results from a late-stage session in which our astrophysicist participants successfully used the WeSpace to collaborate on their own real research problems.
We investigate the differences – in terms of both quantitative performance and subjective preference – between direct-touch and mouse input for unimanual and bimanual tasks on tabletop displays. The results of two experiments show that for bimanual tasks performed on tabletops, users benefit from direct-touch input. However, our results also indicate that mouse input may be more appropriate for a single user working on tabletop tasks requiring only single-point interaction.
Co-located collaborators often work over physical tabletops using combinations of expressive hand gestures and verbal utterances. This paper provides the first observations of how pairs of people communicated and interacted in a multimodal digital table environment built atop existing single-user applications. We contribute to the understanding of these environments in two ways. First, we saw that speech and gesture commands served double duty, both as commands to the computer and as implicit communication to others. Second, in spite of limitations imposed by the underlying single-user application, people were able to work together simultaneously, performing interleaving acts: the graceful mixing of inter-person speech and gesture actions as commands to the system. This work contributes to a more nuanced understanding of multi-user multimodal digital table interaction.
Touch is a compelling input modality for interactive devices; however, touch input on the small screen of a mobile device is problematic because a user’s fingers occlude the very graphical elements they wish to work with. In this paper, we present LucidTouch, a mobile device that addresses this limitation by allowing the user to control the application by touching the back of the device. The key to making this usable is what we call pseudo-transparency: by overlaying an image of the user’s hands onto the screen, we create the illusion that the mobile device itself is semi-transparent. This pseudo-transparency allows users to accurately acquire targets while not occluding the screen with their fingers and hand. LucidTouch also supports multi-touch input, allowing users to operate the device simultaneously with all 10 fingers. We present initial study results indicating that many users found touching the back preferable to touching the front, due to reduced occlusion, higher precision, and the ability to provide multi-finger input.
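The compositing at the core of pseudo-transparency can be illustrated with a simple per-pixel alpha blend of a hand image over the UI frame buffer. The capture and segmentation pipeline and the 40% opacity below are assumptions for illustration, not LucidTouch's actual rendering path.

```python
import numpy as np

# Illustrative sketch of pseudo-transparency: alpha-blend a captured
# image of the user's hands over the UI frame so fingers behind the
# device appear "through" the screen. Capture, segmentation, and the
# opacity value are assumptions for illustration.

def composite(ui_rgb, hands_rgb, hands_mask, alpha=0.4):
    """ui_rgb, hands_rgb: (H, W, 3) float arrays; hands_mask: (H, W) in [0, 1]."""
    a = (alpha * hands_mask)[..., None]        # per-pixel blend weight
    return (1.0 - a) * ui_rgb + a * hands_rgb

ui = np.ones((4, 4, 3)) * 0.9                  # bright UI
hands = np.zeros((4, 4, 3))                    # dark hand pixels
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1.0  # hand covers the center
print(composite(ui, hands, mask)[2, 2])        # dimmed where the hand is
```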
Information shown on a tabletop display can appear distorted when viewed by a seated user. Worse, the impact of this distortion varies with the location of the information on the display. In this paper, we examine how this distortion affects the perception of the basic graphical elements of information visualization shown on displays at various angles. We first examine perception of these elements on a single display, and then compare this to perception across displays, in order to evaluate the effectiveness of various elements for use in a tabletop and multi-display environment. We found that the perception of some graphical elements is more robust to distortion than others. We then develop recommendations for building data visualizations for these environments.
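The geometry behind this location-dependent distortion can be shown with a short calculation: an element lying flat on the table foreshortens with both viewing elevation and distance, so the same glyph shrinks rapidly toward the far edge. The eye height and positions below are example values, not the paper's experimental setup.

```python
import math

# Illustrative geometry: an element of length L lying flat along the
# line of sight subtends roughly L * sin(theta) / d radians, where
# theta is the viewing elevation angle and d the eye-to-element
# distance. All numbers here are example values.

def apparent_size_deg(eye_height, horiz_dist, length=0.02):
    d = math.hypot(eye_height, horiz_dist)     # eye-to-element distance
    sin_theta = eye_height / d                 # foreshortening factor
    return math.degrees(length * sin_theta / d)

for x in (0.2, 0.5, 1.0):                      # near, middle, far (meters)
    print(f"{x:.1f} m: {apparent_size_deg(0.4, x):.2f} deg")
```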
Freehand gestural interaction with direct-touch computation surfaces has recently been the focus of significant research activity. While many interesting gestural interaction techniques have been proposed, their design has been mostly ad hoc and has not been presented within a constructive design framework. In this paper, we develop and articulate a set of design principles for constructing – in a systematic and extensible manner – multi-hand gestures on touch surfaces that can sense multiple points and shapes, and can also accommodate conventional point-based input. To illustrate the generality of these design principles, a set of bimanual continuous gestures embodying them is developed and explored within a prototype tabletop publishing application. We carried out a user evaluation to assess the usability of these gestures, and we use the results and observations to suggest future design guidelines.
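A canonical building block for such multi-point gestures is deriving a translate/rotate/scale update from two contact points. The sketch below shows this mapping; it is a generic example of point-based gesture interpretation, not the paper's specific gesture set.

```python
import math

# Illustrative sketch: derive a translate/rotate/scale update from two
# contact points moving from (p1, p2) to (q1, q2). A minimal example
# of multi-point gesture mapping.

def two_point_transform(p1, p2, q1, q2):
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]
    wx, wy = q2[0] - q1[0], q2[1] - q1[1]
    scale = math.hypot(wx, wy) / math.hypot(vx, vy)
    angle = math.atan2(wy, wx) - math.atan2(vy, vx)
    # Translation of the gesture midpoint.
    tx = (q1[0] + q2[0]) / 2 - (p1[0] + p2[0]) / 2
    ty = (q1[1] + q2[1]) / 2 - (p1[1] + p2[1]) / 2
    return scale, math.degrees(angle), (tx, ty)

# Fingers spread and twist about a fixed midpoint: scale up by ~1.41,
# rotate 45 degrees, no net translation.
print(two_point_transform((0, 0), (1, 0), (0, -0.5), (1, 0.5)))
```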
Although electronic media have changed how people interact with documents, today’s electronic documents and the environments in which they are used remain impoverished relative to traditional paper documents when used by groups of people and across multiple computing devices. Vertical interfaces (e.g., walls and monitors) afford a less democratic style of interaction than is generally observed when people work around a table. In this paper, we introduce MultiSpace, a research effort that explores the role of the table as a central hub supporting ad hoc collaboration in a multi-device environment. The table-centric approach offers new interaction techniques that provide egalitarian access and shared transport of data, supporting mobility and micromobility of electronic content between tables and other devices. Our observations show how people use these techniques, and how tabletop technology can support and augment collaborative tasks.
A digital tabletop, such as the one shown in Figure 1, offers several advantages over other groupware form factors for collaborative applications. However, users of a tabletop system do not share a common perspective for the display of information: what is presented right-side-up to one participant is upside-down for another. In this paper, we survey five different rotation and translation techniques for objects displayed on a direct-touch digital tabletop display. We analyze their suitability for interactive tabletops in light of their respective input and output degrees of freedom, as well as the precision and completeness provided by each. We describe the tradeoffs that arise when considering which of these techniques might be most useful, and when and where each might best be applied.
In this paper, we discuss our adaptation of a single-display, single-user commercial application for use in a multi-device, multi-user environment. We wrap Google Earth, a popular geospatial application, so that multiple instances running on different machines in the same co-located environment present synchronized, coordinated views. The environment includes a touch-sensitive tabletop display, three vertical wall displays, and a TabletPC. We present a set of interaction techniques that allow a group to manage and exploit this collection of devices.
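The coordination step in such a wrapper can be reduced to broadcasting the leading instance's camera state so that peers update their local views to match. The sketch below uses a hypothetical JSON-over-UDP message; the schema, transport, and peer endpoints are assumptions for illustration and have nothing to do with Google Earth's actual interface.

```python
import json
import socket

# Illustrative sketch: broadcast the leading instance's camera state
# so peer displays can match their views. Message schema, transport,
# and endpoints are assumptions for illustration only.

PEERS = [("127.0.0.1", 9100), ("127.0.0.1", 9101)]  # peer machines in practice

def broadcast_view(lat, lon, altitude_m, heading_deg):
    msg = json.dumps({"lat": lat, "lon": lon,
                      "alt": altitude_m, "heading": heading_deg}).encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for peer in PEERS:
        sock.sendto(msg, peer)
    sock.close()

# Each peer listens, decodes, and drives its local instance's camera.
broadcast_view(47.6, -122.3, 5000.0, 0.0)
```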
In many environments, displays are often positioned non-traditionally relative to one or more users. This typically requires users to perform interaction tasks under transformed input-display spatial mappings, and the literature is unclear as to how such transformations affect performance. We present two experiments that explore the impact of display position and input control space orientation on users' subjective preference and objective performance in a docking task. Our results provide guidelines for optimal display placement and control orientation in collaborative computing environments with one or more shared displays.
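A transformed input-display mapping of the kind studied here can be expressed as a rotation applied to each input delta before it moves the cursor. The sketch below shows that mapping; the 90-degree example angle is an assumption for illustration.

```python
import math

# Illustrative sketch: rotate each input delta by the angle between
# the control space and the display space. The example angle is an
# assumption, not a value from the experiments.

def map_delta(dx, dy, rotation_deg):
    r = math.radians(rotation_deg)
    return (dx * math.cos(r) - dy * math.sin(r),
            dx * math.sin(r) + dy * math.cos(r))

# In a y-up convention, a rightward input motion maps to an upward
# cursor motion on a display rotated 90 degrees from the input device.
print(map_delta(1.0, 0.0, 90.0))   # -> (~0.0, 1.0)
```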