Tag Archives: wireless networks

Paper: Gradual Engagement between Digital Devices as a Function of Proximity: From Awareness to Progressive Reveal to Information Transfer

I collaborated on a nifty project with the fine folks from Saul Greenberg’s group at the University of Calgary exploring the emerging possibilities for devices to sense and respond to their digital ecology. When devices have fine-grained sensing of their spatial relationships to one another, as well as to the people in that space, it brings about new ways for users to interact with the resulting system of cooperating devices and displays.

This fine-grained sensing approach makes for an interesting contrast to what Nic Marquardt and I explored in GroupTogether, which intentionally took a more conservative approach towards the sensing infrastructure — with the idea in mind that sometimes, one can still do a lot with very little (sensing).

Taken together, the two papers nicely bracket some possibilities for the future of cross-device interactions and intelligent environments.

This work really underscores that we are still largely in the dark ages with regard to such possibilities for digital ecologies. As new sensors and sensing systems make this kind of rich awareness of the surround of devices and users possible, our devices, operating systems, and user experiences will grow to encompass the expanded horizons of these new possibilities as well.

The full citation and the link to our scientific paper are as follows:

Marquardt, N., Ballendat, T., Boring, S., Greenberg, S. and Hinckley, K., Gradual Engagement between Digital Devices as a Function of Proximity: From Awareness to Progressive Reveal to Information Transfer. In Proceedings of ACM Interactive Tabletops & Surfaces (ITS 2012), Boston, MA, USA, November 11-14, 2012, 10 pp. [PDF] [video – MP4]

Watch the Gradual Engagement via Proximity video on YouTube

GroupTogether — Exploring the Future of a Society of Devices

My latest paper discussing the GroupTogether system just appeared at the 2012 ACM Symposium on User Interface Software & Technology in Cambridge, MA.

GroupTogether video available on YouTube

I’m excited about this work — it really looks hard at what some of the next steps in sensing systems might be, particularly when one starts considering how users can most effectively interact with one another in the context of the rapidly proliferating Society of Devices we are currently witnessing.

I think our paper on the GroupTogether system, in particular, does a really nice job of exploring this with strong theoretical foundations drawn from the sociological literature.

F-formations are the various types of small groups that people form when engaged in a joint activity.

GroupTogether starts by considering the natural small-group behaviors adopted by people who come together to accomplish some joint activity. These small groups can take a variety of distinctive forms, and are known collectively in the sociological literature as f-formations. Think of those distinctive circles of people that form spontaneously at parties: typically they are limited to a maximum of about five people, the orientation of the participants clearly defines an area inside the group that is distinct from the rest of the environment outside the group, and there are fairly well-established social protocols for people entering and leaving the group.
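
To make this concrete, here is a minimal, purely illustrative sketch (not GroupTogether's actual implementation) of how tracked people might be grouped into f-formations: each person's facing direction is projected forward to estimate the shared "o-space" at the center of a group, and people whose projected centers nearly coincide are merged. The Person class, the 0.6 m reach, and the 0.5 m merge threshold are all assumptions invented for the example.

```python
import math
from itertools import combinations

# Illustrative only: a toy f-formation detector, assuming an overhead tracker
# already gives each person's floor position (meters) and facing angle (radians).

class Person:
    def __init__(self, name, x, y, facing):
        self.name, self.x, self.y, self.facing = name, x, y, facing

    def o_space_center(self, reach=0.6):
        # Project forward from the body to estimate the shared "o-space"
        # that this person's transactional segment points into.
        return (self.x + reach * math.cos(self.facing),
                self.y + reach * math.sin(self.facing))

def detect_f_formations(people, threshold=0.5):
    """Group people whose projected o-space centers nearly coincide."""
    groups = [{p.name} for p in people]
    centers = {p.name: p.o_space_center() for p in people}
    for a, b in combinations(people, 2):
        (ax, ay), (bx, by) = centers[a.name], centers[b.name]
        if math.hypot(ax - bx, ay - by) < threshold:
            # Merge the groups containing a and b.
            ga = next(g for g in groups if a.name in g)
            gb = next(g for g in groups if b.name in g)
            if ga is not gb:
                groups.remove(gb)
                ga |= gb
    return groups

# Two people facing each other form a group; a third person facing away does not.
alice = Person("alice", 0.0, 0.0, 0.0)          # facing +x
bob   = Person("bob",   1.2, 0.0, math.pi)      # facing -x, toward alice
carol = Person("carol", 0.6, 2.0, math.pi / 2)  # facing away from both
print(detect_f_formations([alice, bob, carol]))
# e.g. [{'alice', 'bob'}, {'carol'}]
```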

A small group of two users as sensed via GroupTogether’s overhead Kinect depth-cameras.

GroupTogether also senses the subtle orientation cues of how users handle and posture their tablet computers. These cues are known as micro-mobility, a communicative strategy that people often employ with physical paper documents, such as when a sales representative orients a document towards you to direct your attention and indicate that it is your turn to sign.
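
As a rough illustration of what sensing micro-mobility might involve at the lowest level, the following sketch detects a deliberate tilt of a tablet from its accelerometer alone. The sample format, the screen-normal convention, and the threshold values are assumptions made up for the example; the hysteresis simply keeps the gesture from flickering on and off near the threshold.

```python
import math

# Illustrative sketch only: detecting a deliberate "tilt" of a tablet from its
# accelerometer, assuming readings arrive as (ax, ay, az) in units of g with
# z pointing out of the screen (so a flat, face-up tablet reads roughly (0, 0, 1)).

TILT_ON_DEGREES = 25   # assumed threshold for "tilted toward someone"
TILT_OFF_DEGREES = 15  # lower threshold to release, for hysteresis

def tilt_angle(ax, ay, az):
    """Angle between the screen normal and gravity, in degrees."""
    horiz = math.hypot(ax, ay)
    return math.degrees(math.atan2(horiz, az))

class TiltDetector:
    def __init__(self):
        self.tilted = False

    def update(self, ax, ay, az):
        angle = tilt_angle(ax, ay, az)
        if not self.tilted and angle > TILT_ON_DEGREES:
            self.tilted = True
        elif self.tilted and angle < TILT_OFF_DEGREES:
            self.tilted = False
        return self.tilted

detector = TiltDetector()
print(detector.update(0.0, 0.0, 1.0))   # flat on a table -> False
print(detector.update(0.0, 0.5, 0.85))  # tilted roughly 30 degrees -> True
```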

Our system, then, is the first to put small-group f-formations, sensed via overhead Kinect depth-camera tracking, in play simultaneously with the micro-mobility of slate computers, sensed via embedded accelerometers and gyros.

The GroupTogether prototype sensing environment and set-up

GroupTogether uses f-formations to give meaning to the micro-mobility of slate computers. It understands which users have come together in a small group, and which users have not. So you can just tilt your tablet towards a couple of friends standing near you to share content, whereas another person who may be nearby but facing the other way — and thus clearly outside of the social circle of the small group — would not be privy to the transaction. Thus, the techniques lower the barriers to sharing information in small-group settings.
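
The resulting policy is simple to state: a tilt gesture only transfers content to someone in the sender's own f-formation. Here is a toy sketch of that decision, using made-up user names and data structures rather than GroupTogether's real ones:

```python
# A toy sketch (not GroupTogether's actual code) of the sharing policy described
# above: a tilt only transfers content to people who are in the same sensed
# f-formation as the sender.

def may_share(sender, receiver, f_formations, tilt_target):
    """Allow a transfer only within the sender's f-formation.

    f_formations: list of sets of user names, as sensed by the room tracker.
    tilt_target:  the user the sender's tablet is currently tilted toward,
                  or None if no tilt gesture is in progress.
    """
    if tilt_target != receiver:
        return False
    return any(sender in g and receiver in g for g in f_formations)

groups = [{"alice", "bob"}, {"carol"}]
print(may_share("alice", "bob", groups, tilt_target="bob"))      # True: same group
print(may_share("alice", "carol", groups, tilt_target="carol"))  # False: carol faces away
```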

Check out the video to see what these techniques look like in action, as well as to see how the system also considers groupings of people close to situated displays such as electronic whiteboards.

The full text of our scientific paper on GroupTogether, along with the full citation, is also available.

My co-author Nic Marquardt was the first author and delivered the talk. Saul Greenberg of the University of Calgary also contributed many great insights to the paper.

Image credits: Nic Marquardt

Paper: Cross-Device Interaction via Micro-mobility and F-formations (“GroupTogether”)

Marquardt, N., Hinckley, K., and Greenberg, S., Cross-Device Interaction via Micro-mobility and F-formations. In ACM UIST 2012 Symposium on User Interface Software and Technology (UIST ’12), Cambridge, MA, Oct. 7-10, 2012, pp. (TBA). ACM, New York, NY, USA. [PDF] [video – WMV]. Known as the GroupTogether system.

See also my post with some further perspective on the GroupTogether project.

Watch the GroupTogether video on YouTube

Journal Article: Synchronous Gestures in Multi-Display Environments

Ramos, G., Hinckley, K., Wilson, A., and Sarin, R., Synchronous Gestures in Multi-Display Environments. In Human–Computer Interaction, Special Issue: Ubiquitous Multi-Display Environments, Volume 24, Issue 1-2, 2009, pp. 117-169. [Author’s Manuscript PDF – not final proof]

Paper: Codex: a Dual-Screen Tablet Computer

Hinckley, K., Dixon, M., Sarin, R., Guimbretiere, F., and Balakrishnan, R., Codex: a Dual-Screen Tablet Computer. In Proc. CHI 2009 Conf. on Human Factors in Computing Systems, Boston, MA, pp. 1933-1942. [PDF] [video .MOV] [OfficeLabs Thought Leadership Award]

Watch the Codex video on YouTube

Unpublished Manuscript: BlueRendezvous: Simple Pairing for Mobile Devices

Sarin, R., Hinckley, K., BlueRendezvous: Simple Pairing for Mobile Devices. Unpublished Manuscript, Jan. 26, 2006, 9 pp. White paper describing the BlueRendezvous demonstration, which we never published as a stand-alone paper. Parts of this work appeared in a subsequent journal article. [PDF]

Video Abstract: Stitching: Connecting Wireless Mobile Devices with Pen Gestures

Hinckley, K., Ramos, G., Guimbretiere, F., Baudisch, P., Smith, M., Stitching: Connecting Wireless Mobile Devices with Pen Gestures. Video Abstract: ACM 2004 Conf. on Computer Supported Cooperative Work (CSCW 2004 formal video program), Chicago, IL, Nov. 6-10, 2004. [PDF] [video .MOV]

Paper: The NearMe Wireless Proximity Server

Krumm, J., Hinckley, K., The NearMe Wireless Proximity Server. In Proceedings of Ubicomp 2004: Ubiquitous Computing, September 7-10, 2004, Nottingham, England, pp. 283-300. Published by Springer. [PDF] [John Krumm’s NearMe Project Page]

Paper: Stitching: Pen Gestures That Span Multiple Displays

Hinckley, K., Ramos, G., Guimbretiere, F., Baudisch, P., and Smith, M., Stitching: Pen Gestures That Span Multiple Displays. In Proc. AVI 2004 Working Conference on Advanced Visual Interfaces, Gallipoli, Italy, pp. 23-31. [PDF] [video mpeg] [bonus video: stitching for mobiles]

Watch the Stitching video on YouTube

Paper: Distributed and Local Sensing Techniques for Face-to-face Collaboration

Hinckley, K., Distributed and Local Sensing Techniques for Face-to-face Collaboration. In ICMI/PUI ’03 Fifth International Conference on Multimodal Interfaces, Vancouver, British Columbia, Canada, Nov. 5-7, 2003, pp. 81-84. [PDF] [Footage of the techniques included in this video .MPEG]

Watch on YouTube