Publications: Articles

2008
Shen C, Ryall K, Forlines C, Esenther A, Vernier FD, Everitt K, Wu M, Wigdor D, Ringel Morris M, Hancock M, et al. "Collaborative Tabletop Research and Evaluation: Interface and Interactions on Direct-Touch Horizontal Surfaces". In: Interactive Artifacts and Furniture Supporting Collaborative Work and Learning. New York, USA: Springer; 2008. pp. 111-128.
2007
Tse E, Greenberg S, Shen C, Forlines C. "Multimodal Multiplayer Tabletop Gaming". Computers in Entertainment (CIE) - Interactive TV. 2007;5(2):Article No. 12.

There is a large disparity between the rich physical interfaces of co-located arcade games and the generic input devices seen in most home console systems. In this article we argue that a digital table is a conducive form factor for general co-located home gaming as it affords: (a) seating in collaboratively relevant positions that give all equal opportunity to reach into the surface and share a common view; (b) rich whole-handed gesture input usually seen only when handling physical objects; (c) the ability to monitor how others use space and access objects on the surface; and (d) the ability to communicate with each other and interact on top of the surface via gestures and verbal utterance. Our thesis is that multimodal gesture and speech input benefits collaborative interaction over such a digital table. To investigate this thesis, we designed a multimodal, multiplayer gaming environment that allows players to interact directly atop a digital table via speech and rich whole-hand gestures. We transform two commercial single-player computer games, representing a strategy and simulation game genre, to work within this setting.

Shen C. "From Clicks to Touches: Enabling Face-to-Face Shared Social Interface on Multi-touch Tabletops". In: Online Communities and Social Computing, Lecture Notes in Computer Science, Vol. 4564. Springer; 2007. pp. 169-175.
Making interaction with a digital user interface disappear into, and become a part of, human-to-human interaction and conversation is a challenge. The conventional metaphors and underlying interface infrastructure of single-user desktop systems have traditionally been geared towards single-mouse-and-keyboard, click-and-type, WIMP interface design. On the other hand, people usually meet in a social context around a table, facing each other. A table setting provides a large interactive visual and tangible surface. It affords and encourages collaboration, coordination, serendipity, as well as simultaneous and parallel interaction among multiple people. In this paper, we examine and explore the opportunities, challenges, research issues, pitfalls, and plausible approaches for enabling direct-touch, shared social interactions on multi-touch multi-user tabletops.
Tse E, Shen C, Greenberg S, Forlines C. "How Pairs Interact Over a Multimodal Digital Table". Proceedings of the 2007 ACM Conference on Human Factors in Computing Systems (CHI'07). 2007.
Co-located collaborators often work over physical tabletops using combinations of expressive hand gestures and verbal utterances. This paper provides the first observations of how pairs of people communicated and interacted in a multimodal digital table environment built atop existing single user applications. We contribute to the understanding of these environments in two ways. First, we saw that speech and gesture commands served double duty as both commands to the computer, and as implicit communication to others. Second, in spite of limitations imposed by the underlying single-user application, people were able to work together simultaneously, and they performed interleaving acts: the graceful mixing of inter-person speech and gesture actions as commands to the system. This work contributes to the intricate understanding of multi-user multimodal digital table interaction.
how_pairs_interact.pdf
Wigdor D, Shen C, Forlines C, Balakrishnan R. "Perception of Elementary Graphical Elements in Tabletop and Multi-Surface Environments". Proceedings of the 2007 ACM Conference on Human Factors in Computing Systems (CHI'07). 2007.
Information shown on a tabletop display can appear distorted when viewed by a seated user. Even worse, the impact of this distortion is different depending on the location of the information on the display. In this paper, we examine how this distortion affects the perception of the basic graphical elements of information visualization shown on displays at various angles. We first examine perception of these elements on a single display, and then compare this to perception across displays, in order to evaluate the effectiveness of various elements for use in a tabletop and multi-display environment. We found that the perception of some graphical elements is more robust to distortion than others. We then develop recommendations for building data visualizations for these environments.
perception_of_elementary_graphical_elements_in_tabletop_and_multi-surface_environments.pdf
Forlines C, Wigdor D, Shen C, Balakrishnan R. "Direct-Touch vs. Mouse Input for Tabletop Displays". Proceedings of the 2007 ACM Conference on Human Factors in Computing Systems (CHI'07). 2007.
We investigate the differences – in terms of both quantitative performance and subjective preference – between direct-touch and mouse input for unimanual and bimanual tasks on tabletop displays. The results of two experiments show that for bimanual tasks performed on tabletops, users benefit from direct-touch input. However, our results also indicate that mouse input may be more appropriate for a single user working on tabletop tasks requiring only single-point interaction.
direct-touch_vs._mouse_input_for_tabletop_displays.pdf
Wigdor D, Forlines C, Baudisch P, Barnwell J, Shen C. "LucidTouch: A See-Through Mobile Device". Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology. 2007.
Touch is a compelling input modality for interactive devices; however, touch input on the small screen of a mobile device is problematic because a user’s fingers occlude the graphical elements he wishes to work with. In this paper, we present LucidTouch, a mobile device that addresses this limitation by allowing the user to control the application by touching the back of the device. The key to making this usable is what we call pseudo-transparency: by overlaying an image of the user’s hands onto the screen, we create the illusion of the mobile device itself being semitransparent. This pseudo-transparency allows users to accurately acquire targets while not occluding the screen with their fingers and hand. LucidTouch also supports multi-touch input, allowing users to operate the device simultaneously with all 10 fingers. We present initial study results that indicate that many users found touching on the back to be preferable to touching on the front, due to reduced occlusion, higher precision, and the ability to make multi-finger input.
lucidtouch_a_see-through_mobile_device.pdf
2006
Tse E, Greenberg S, Shen C, Forlines C. "Multimodal Multiplayer Tabletop Gaming". Third International Workshop on Pervasive Gaming Applications (PerGames 2006). 2006.

There is a large disparity between the rich physical interfaces of co-located arcade games and the generic input devices seen in most home console systems. In this paper we argue that a digital table is a conducive form factor for general co-located home gaming as it affords: (a) seating in collaboratively relevant positions that give all equal opportunity to reach into the surface and share a common view, (b) rich whole-handed gesture input normally only seen when handling physical objects, (c) the ability to monitor how others use space and access objects on the surface, and (d) the ability to communicate to each other and interact atop the surface via gestures and verbal utterances. Our thesis is that multimodal gesture and speech input benefits collaborative interaction over such a digital table. To investigate this thesis, we designed a multimodal, multiplayer gaming environment that allows players to interact directly atop a digital table via speech and rich whole-hand gestures. We transform two commercial single-player computer games, representing a strategy and simulation game genre, to work within this setting.

pergame2006bestpaper.multimodalgaming.pdf
Ryall K, Ringel Morris M, Everitt K, Forlines C, Shen C. "Experiences With and Observations of Direct-Touch Tables", in IEEE International Workshop on Horizontal Interactive Human-Computer Systems (TableTop). Adelaide, Australia; 2006.
experiences_with_and_observations_of_direct-touch_tables.pdf
Everitt K, Shen C, Ryall K, Forlines C. "MultiSpace: Enabling Electronic Document Micro-mobility in Table-Centric, Multi-Device Environments", in IEEE International Workshop on Horizontal Interactive Human-Computer Systems (TableTop). Adelaide, Australia; 2006.
Although electronic media has changed how people interact with documents, today’s electronic documents and the environments in which they are used are still impoverished relative to traditional paper documents when used by groups of people and across multiple computing devices. Vertical interfaces (e.g., walls and monitors) afford a less democratic style of interaction than generally observed when people are working around a table. In this paper, we introduce MultiSpace, a research effort which explores the role of the table as a central hub to support ad hoc collaboration in a multi-device environment. The table-centric approach offers new interaction techniques to provide egalitarian access and shared transport of data, supporting mobility and micromobility [11] of electronic content between tables and other devices. Our observations show how people use these techniques, and how tabletop technology can support and augment collaborative tasks.
multispace_enabling_electronic_document_micro-mobility_in_table-centric_multi-device_environments.pdf
Wu M, Shen C, Ryall K, Forlines C, Balakrishnan R. "Gesture Registration, Relaxation, and Reuse for Multi-Point Direct-Touch Surfaces", in IEEE International Workshop on Horizontal Interactive Human-Computer Systems (TableTop). Adelaide, Australia; 2006.
Freehand gestural interaction with direct-touch computation surfaces has been the focus of significant research activity recently. While many interesting gestural interaction techniques have been proposed, their design has been mostly ad-hoc and has not been presented within a constructive design framework. In this paper, we develop and articulate a set of design principles for constructing – in a systematic and extensible manner – multi-hand gestures on touch surfaces that can sense multiple points and shapes, and can also accommodate conventional point-based input. To illustrate the generality of these design principles, a set of bimanual continuous gestures that embody these principles are developed and explored within a prototype tabletop publishing application. We carried out a user evaluation to assess the usability of these gestures and use the results and observations to suggest future design guidelines.
gesture_registration.pdf
Hancock MS, Vernier FD, Wigdor D, Carpendale S, Shen C. "Rotation and Translation Mechanisms for Tabletop Interaction", in IEEE International Workshop on Horizontal Interactive Human-Computer Systems (TableTop). Adelaide, Australia; 2006.
A digital tabletop, such as the one shown in Figure 1, offers several advantages over other groupware form factors for collaborative applications. However, users of a tabletop system do not share a common perspective for the display of information: what is presented right-side-up to one participant is upside-down for another. In this paper, we survey five different rotation and translation techniques for objects displayed on a direct-touch digital tabletop display. We analyze their suitability for interactive tabletops in light of their respective input and output degrees of freedom, as well as the precision and completeness provided by each. We describe various tradeoffs that arise when considering which, when, and where each of these techniques might be most useful.
rotation_and_translation_mechanisms_for_tabletop_interaction.pdf
Wigdor D, Shen C, Forlines C, Balakrishnan R. "Effects of Display Position and Control Space Orientation on User Preference and Performance". Proceedings of CHI. 2006.
In many environments, it is often the case that input is made to displays that are positioned non-traditionally relative to one or more users. This typically requires users to perform interaction tasks under transformed input-display spatial mappings, and the literature is unclear as to how such transformations affect performance. We present two experiments that explore the impact of display space position and input control space orientation on users' subjective preference and objective performance in a docking task. Our results provide guidelines as to optimal display placement and control orientation in collaborative computing environments with one or more shared displays.
effects_of_display_position_and_control_space_orientation_on_user_preference_and_performance.pdf
Tse E, Shen C, Greenberg S, Forlines C. "Enabling Interaction with Single User Applications through Speech and Gestures on a Multi-User Tabletop", in Advanced Visual Interfaces (AVI) International Working Conference. Venice, Italy; 2006.
Co-located collaborators often work over physical tabletops with rich geospatial information. Previous research shows that people use gestures and speech as they interact with artefacts on the table and communicate with one another. With the advent of large multi-touch surfaces, developers are now applying this knowledge to create appropriate technical innovations in digital table design. Yet they are limited by the difficulty of building a truly useful collaborative application from the ground up. In this paper, we circumvent this difficulty by: (a) building a multimodal speech and gesture engine around the Diamond Touch multi-user surface, and (b) wrapping existing, widely-used off-the-shelf single-user interactive spatial applications with a multimodal interface created from this engine. Through case studies of two quite different geospatial systems – Google Earth and Warcraft III – we show the new functionalities, feasibility and limitations of leveraging such single-user applications within a multi-user, multimodal tabletop. This research informs the design of future multimodal tabletop applications that can exploit single-user software conveniently available in the market. We also contribute (1) a set of technical and behavioural affordances of multimodal interaction on a tabletop, and (2) lessons learnt from the limitations of single-user applications.
enabling_interaction_with_single_user_applications_through_speech_and_gestures_on_a_multi-user_tabletop.pdf
Wigdor D, Shen C, Forlines C, Balakrishnan R. "Table-Centric Interactive Spaces for Real-Time Collaboration: Solutions, Evaluation, and Application Scenarios", in CollabTech 2006. Tsukuba, Japan; 2006.
Tables have historically played a key role in many real-time collaborative environments, often referred to as “Operation Centres”. Today, these environments have been transformed by computational technology into spaces with large vertical displays surrounded by numerous desktop computers. Despite significant research activity in the area of tabletop computing, very little is known about how to best integrate a digital tabletop into these multi-surface environments. In this paper, we identify the unique characteristics of this problem space, and present the evaluation of a system proposed to demonstrate how an interactive tabletop can be used in a real-time operations centre to facilitate collaborative situation-assessment and decision-making.
table-centric_interactive_spaces.pdf
Shen C, Ryall K, Forlines C, Esenther A, Vernier FD, Everitt K, Wu M, Wigdor D, Ringel Morris M, Hancock M, et al. "Informing the Design of Direct-Touch Tabletops". IEEE Computer Graphics and Applications. 2006;(Special Issue).
informing_the_design_of_direct-touch_tabletops.pdf
Wigdor D, Leigh D, Forlines C, Shen C, Shipman S, Barnwell J, Balakrishnan R. "Under the Table Interaction". Proceedings of the 2006 ACM Conference on User Interface Software and Technology. 2006.
We explore the design space of a two-sided interactive touch table, designed to receive touch input from both the top and bottom surfaces of the table. By combining two registered touch surfaces, we are able to offer a new dimension of input for co-located collaborative groupware. This design accomplishes the goal of increasing the relative size of the input area of a touch table while maintaining its direct-touch input paradigm. We describe the interaction properties of this two-sided touch table, report the results of a controlled experiment examining the precision of user touches to the underside of the table, and describe a series of application scenarios we developed for use on inverted and two-sided tables. Finally, we present a list of design recommendations based on our experiences and observations with inverted and two-sided tables.
under_the_table_interaction.pdf
Forlines C, Esenther A, Shen C, Wigdor D, Ryall K. "Adapting a Single-Display, Single-User Geospatial Application for a Multi-Device, Multi-User Environment". Proceedings of the 2006 ACM Conference on User Interface Software and Technology. 2006.
In this paper, we discuss our adaptation of a single-display, single-user commercial application for use in a multi-device, multi-user environment. We wrap Google Earth, a popular geospatial application, in a manner that allows for synchronized coordinated views among multiple instances running on different machines in the same co-located environment. The environment includes a touch-sensitive tabletop display, three vertical wall displays, and a TabletPC. A set of interaction techniques that allow a group to manage and exploit this collection of devices is presented.
multi-user_multi-display_interaction.pdf
Forlines C, Shen C, Wigdor D, Balakrishnan R. "Exploring the Effects of Group Size and Display Configuration on Visual Search". Proceedings of the 2006 ACM Conference on Computer Supported Cooperative Work. 2006.
Visual search is the subject of countless psychology studies in which people search for target items within a scene. The bulk of this literature focuses on the individual with the goal of understanding the human perceptual system. In life, visual search is performed not only by individuals, but also by groups – a team of doctors may study an x-ray and a team of analysts may study a satellite photograph. In this paper, we examine the issues one should consider when searching as a group. We present the details of an experiment designed to investigate the impact of group size on visual search performance, and how different display configurations affected that performance. We asked individuals, pairs, and groups of four people to participate in a baggage screening task in which these teams searched simulated x-rays for prohibited items. Teams conducted these searches on single monitors, a row of four monitors, and on a single horizontal display. Our findings suggest that groups commit far fewer errors in visual search tasks, although they may perform slower than individuals under certain conditions. The interaction between group size and display configuration turned out to be an important factor as well.
exploring_the_effects_of_group_size_and_display_configuration_on_visual_search.pdf
2005
Everitt K, Shen C, Ryall K, Forlines C. "Modal Spaces: Spatial Multiplexing to Mediate Direct-Touch Input on Large Displays". ACM CHI '05 Extended Abstracts on Human Factors in Computing Systems. 2005.
We present a new interaction technique for large direct-touch displays called Modal Spaces. Modal interfaces require the user to keep track of the state of the system. The Modal Spaces technique adds screen location as an additional parameter of the interaction. Each modal region on the display supports a particular set of input actions, and the visual background indicates the space's use. This workbench approach exploits the larger form factor of the display. Our spatial multiplexing of the display supports a document-centric paradigm (as opposed to application-centric), enabling input gesture reuse, while complementing and enhancing existing practices of modal interfaces. We present a proof-of-concept system and discuss potential applications, design issues, and future research directions.
modal_spaces.pdf
