Tag Archives: CSCW (computer-supported cooperative work)

Paper: SurfaceFleet: Exploring Distributed Interactions Unbounded from Device, Application, User, and Time

SurfaceFleet is a multi-year project at Microsoft Research contributing a system and toolkit that uses resilient and performant distributed programming techniques to explore cross-device user experiences.

With appropriate design, these technologies afford mobility of user activity unbounded by device, application, user, and time.

The vision of the project is to enable a future where an ecosystem of technologies seamlessly transitions user activity from one place to another — whether that “place” takes the form of a literal location, a different device form-factor, the presence of a collaborator, or the availability of the information needed to complete a particular task.

The goal is a Society of Technologies that fosters meaningful relationships amongst the members of this society, rather than centering on any particular device.

This engenders mobility of user activity in a way that takes advantage of recent advances in networking and storage, and that supports consumer trends of multiple device usage and distributed workflows—not the least of which is the massive global shift towards remote work (bridging multiple users, on multiple devices, across local and remote locations).

[Image: SurfaceFleet logo]


In this particular paper, published at UIST 2020, we explored the trend for knowledge work to increasingly span multiple computing surfaces.

Yet in status quo user experiences, content as well as tools, behaviors, and workflows are largely bound to the current device—running the current application, for the current user, and at the current moment in time.

This work is where we first introduce SurfaceFleet as a system and toolkit founded on resilient distributed programming techniques. We then leverage this toolkit to explore a range of cross-device interactions that are unbounded in these four dimensions of device, application, user, and time.

As a reference implementation, we describe an interface built using SurfaceFleet that employs lightweight, semi-transparent UI elements known as Applets.

Applets appear always-on-top of the operating system, application windows, and (conceptually) above the device itself. But all connections and synchronized data are virtualized and made resilient through the cloud.

For example, a sharing Applet known as a Portfolio allows a user to drag and drop unbound Interaction Promises into a document. Such promises can then be fulfilled with content asynchronously, at a later time (or multiple times), from another device, and by the same or a different user.
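To make the idea concrete, here is a minimal sketch of what a late-binding promise like this might look like. The names (`InteractionPromise`, `fulfill`, `on_fulfill`) are illustrative assumptions, not SurfaceFleet’s actual API; the key behaviors are that a document can hold an unbound placeholder, and that it can be fulfilled asynchronously, more than once, and from different sources.

```python
from typing import Any, Callable, List, Tuple


class InteractionPromise:
    """A placeholder for content that may arrive later, possibly multiple
    times, from a different device or user (hypothetical sketch)."""

    def __init__(self, description: str):
        self.description = description
        self._callbacks: List[Callable[[Any, str], None]] = []
        self.fulfillments: List[Tuple[Any, str]] = []  # (content, source)

    def on_fulfill(self, callback: Callable[[Any, str], None]) -> None:
        # Register a handler (e.g., the embedding document) to run on
        # each fulfillment.
        self._callbacks.append(callback)

    def fulfill(self, content: Any, source: str) -> None:
        # Unlike a one-shot future, the promise can be fulfilled
        # repeatedly, at any later time, by any device or user.
        self.fulfillments.append((content, source))
        for cb in self._callbacks:
            cb(content, source)


# Usage: a document embeds an unbound promise; other devices fulfill it later.
received = []
promise = InteractionPromise("photo for the trip report")
promise.on_fulfill(lambda content, source: received.append((content, source)))

promise.fulfill("beach.jpg", source="Alice's phone")
promise.fulfill("sunset.jpg", source="Bob's tablet")  # fulfilled a second time
```

The design point the sketch tries to capture is the decoupling: the document commits to a slot for content without committing to where, when, or from whom that content arrives.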

[Image: SurfaceFleet Applets]

This work leans heavily into present computing trends suggesting that cross-device and distributed systems will have major impact on HCI going forward:

With Moore’s Law at an end, yet networking and storage exhibiting exponential gains, the future appears to favor systems that emphasize seamless mobility of data, rather than reliance on any particular CPU.

At the same time, the ubiquity of connected and inter-dependent devices, of many different form factors, hints at a Society of Technologies that establishes meaningful relationships amongst the members of this society.

This favors the mobility of user activity, rather than reliance on any particular device, to achieve a future where HCI can meet full human potential.

Overall, SurfaceFleet advances this perspective through a concrete system implementation as well as our unifying conceptual contribution that frames mobility as transitions in place in terms of device, application, user, and time—and the resulting exploration of techniques that simultaneously bridge all four of these gaps.

Watch SurfaceFleet video on YouTube


Frederik Brudy*, David Ledo*, Michel Pahud, Nathalie Henry Riche, Christian Holz, Anand Waghmare, Hemant Surale, Marcus Peinado, Xiaokuan Zhang, Shannon Joyner, Badrish Chandramouli, Umar Farooq Minhas, Jonathan Goldstein, Bill Buxton, and Ken Hinckley. SurfaceFleet: Exploring Distributed Interactions Unbounded from Device, Application, User, and Time. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (UIST ’20). ACM, New York, NY, USA. Virtual Event, USA, October 20-23, 2020, pp. 7-21. https://doi.org/10.1145/3379337.3415874
* The first two authors contributed equally to this work.

[PDF] [30-second preview – mp4] [Full video – mp4] [Supplemental video “How to” – mp4 | Supplemental video on YouTube].

[Frederik Brudy and David Ledo’s SurfaceFleet virtual talk from UIST 2020 on YouTube]

Paper: Gradual Engagement between Digital Devices as a Function of Proximity: From Awareness to Progressive Reveal to Information Transfer

I collaborated on a nifty project with the fine folks from Saul Greenberg’s group at the University of Calgary exploring the emerging possibilities for devices to sense and respond to their digital ecology. When devices have fine-grained sensing of their spatial relationships to one another, as well as to the people in that space, it brings about new ways for users to interact with the resulting system of cooperating devices and displays.

This fine-grained sensing approach makes for an interesting contrast to what Nic Marquardt and I explored in GroupTogether, which intentionally took a more conservative approach towards the sensing infrastructure — with the idea in mind that sometimes, one can still do a lot with very little (sensing).

Taken together, the two papers nicely bracket some possibilities for the future of cross-device interactions and intelligent environments.

This work really underscores that we are still largely in the dark ages with regard to such possibilities for digital ecologies. As new sensors and sensing systems make this kind of rich awareness of the surround of devices and users possible, our devices, operating systems, and user experiences will grow to encompass the expanded horizons of these new possibilities as well.

The full citation and the link to our scientific paper are as follows:

Marquardt, N., Ballendat, T., Boring, S., Greenberg, S. and Hinckley, K., Gradual Engagement between Digital Devices as a Function of Proximity: From Awareness to Progressive Reveal to Information Transfer. In Proceedings of ACM Interactive Tabletops & Surfaces (ITS 2012). Boston, MA, USA, November 11-14. 10 pp. [PDF] [video – MP4].

Watch the Gradual Engagement via Proximity video on YouTube

GroupTogether — Exploring the Future of a Society of Devices

My latest paper discussing the GroupTogether system just appeared at the 2012 ACM Symposium on User Interface Software & Technology in Cambridge, MA.

GroupTogether video available on YouTube

I’m excited about this work — it really looks hard at what some of the next steps in sensing systems might be, particularly when one starts considering how users can most effectively interact with one another in the context of the rapidly proliferating Society of Devices we are currently witnessing.

I think our paper on the GroupTogether system, in particular, does a really nice job of exploring this with strong theoretical foundations drawn from the sociological literature.

F-formations are the various types of small groups that people form when engaged in a joint activity.

GroupTogether starts by considering the natural small-group behaviors adopted by people who come together to accomplish some joint activity. These small groups can take a variety of distinctive forms, and are known collectively in the sociological literature as f-formations. Think of those distinctive circles of people that form spontaneously at parties: typically they are limited to a maximum of about 5 people, the orientation of the participants clearly defines an area inside the group that is distinct from the rest of the environment outside the group, and there are fairly well established social protocols for people entering and leaving the group.


A small group of two users as sensed via GroupTogether’s overhead Kinect depth-cameras.

GroupTogether also senses the subtle orientation cues of how users handle and posture their tablet computers. These cues are known as micro-mobility, a communicative strategy that people often employ with physical paper documents, such as when a sales representative orients a document towards you to direct your attention and indicate that it is your turn to sign.

Our system, then, is the first to put small-group f-formations, sensed via overhead Kinect depth-camera tracking, in play simultaneously with the micro-mobility of slate computers, sensed via embedded accelerometers and gyros.

The GroupTogether prototype sensing environment and set-up

GroupTogether uses f-formations to give meaning to the micro-mobility of slate computers. It understands which users have come together in a small group, and which users have not. So you can just tilt your tablet towards a couple of friends standing near you to share content, whereas another person who may be nearby but facing the other way — and thus clearly outside of the social circle of the small group — would not be privy to the transaction. Thus, the techniques lower the barriers to sharing information in small-group settings.
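The sharing rule described above can be sketched in a few lines. This is a hedged illustration of the decision logic only: the group representation, function names, and tilt threshold are assumptions for the sketch, not GroupTogether’s actual implementation.

```python
from typing import List, Set

TILT_THRESHOLD_DEG = 25  # assumed: tilt beyond this reads as a deliberate gesture


def share_targets(sharer: str, tilt_deg: float,
                  f_formations: List[Set[str]]) -> Set[str]:
    """Return the users who should receive the shared content.

    f_formations: list of sets of user names, each set one sensed small group.
    """
    if tilt_deg < TILT_THRESHOLD_DEG:
        return set()  # no micro-mobility sharing gesture detected
    for group in f_formations:
        if sharer in group:
            return group - {sharer}  # everyone in the circle except the sharer
    return set()  # sharer is not currently part of any small group


# Ann and Ben face each other; Carl is nearby but facing away (separate group).
groups = [{"Ann", "Ben"}, {"Carl"}]
print(share_targets("Ann", tilt_deg=40, f_formations=groups))  # {'Ben'}
print(share_targets("Ann", tilt_deg=5, f_formations=groups))   # set()
```

The point of the rule is that the f-formation, not mere physical proximity, defines the scope of the share: Carl is close by but outside the circle, so he never receives the content.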

Check out the video to see what these techniques look like in action, as well as to see how the system also considers groupings of people close to situated displays such as electronic whiteboards.

The full text of our scientific paper on GroupTogether and the citation is also available.

My co-author Nic Marquardt was the first author and delivered the talk. Saul Greenberg of the University of Calgary also contributed many great insights to the paper.

Image credits: Nic Marquardt

Paper: Cross-Device Interaction via Micro-mobility and F-formations (“GroupTogether”)

Marquardt, N., Hinckley, K., and Greenberg, S., Cross-Device Interaction via Micro-mobility and F-formations. In ACM UIST 2012 Symposium on User Interface Software and Technology (UIST ’12). ACM, New York, NY, USA, Cambridge, MA, Oct. 7-10, 2012, pp. (TBA). [PDF] [video – WMV]. Known as the GroupTogether system.

See also my post with some further perspective on the GroupTogether project.

Watch the GroupTogether video on YouTube

Paper: CodeSpace: Touch + Air Gesture Hybrid Interactions for Supporting Developer Meetings

Bragdon, A., DeLine, R., Hinckley, K., and Morris, M. R., Code Space: Touch + Air Gesture Hybrid Interactions for Supporting Developer Meetings. In Proc. ACM International Conference on Interactive Tabletops and Surfaces (ITS ’11). ACM, New York, NY, USA, Kobe, Japan, November 13-16, 2011, pp. 212-221. [PDF] [video – WMV]. As featured on Engadget and many other online forums.

Watch CodeSpace video on YouTube

Journal Article: Synchronous Gestures in Multi-Display Environments

Ramos, G., Hinckley, K., Wilson, A., and Sarin, R., Synchronous Gestures in Multi-Display Environments. In Human–Computer Interaction, Special Issue: Ubiquitous Multi-Display Environments, Volume 24, Issue 1-2, 2009, pp. 117-169. [Author’s Manuscript PDF – not final proof]

Paper: Codex: a Dual-Screen Tablet Computer

Hinckley, K., Dixon, M., Sarin, R., Guimbretiere, F., and Balakrishnan, R. 2009. Codex: a Dual-Screen Tablet Computer. In Proc. CHI 2009 Conf. on Human Factors in Computing Systems, Boston, MA, pp. 1933-1942. [PDF] [video .MOV] [OfficeLabs Thought Leadership Award]

Watch the Codex video on YouTube

Paper: Distributed and Local Sensing Techniques for Face-to-face Collaboration

Hinckley, K., Distributed and Local Sensing Techniques for Face-to-face Collaboration. In ICMI/PUI ’03 Fifth International Conference on Multimodal Interfaces, Vancouver, British Columbia, Canada, Nov. 5-7, 2003, pp. 81-84. [PDF] [Footage of the techniques included in this video .MPEG]

Watch on YouTube

Paper: Synchronous Gestures for Multiple Persons and Computers

Hinckley, K. Synchronous Gestures for Multiple Persons and Computers. In Proc. UIST 2003 Symp. on User Interface Software and Technology, Vancouver, Canada, pp. 149-158. [PDF] [video mpeg]

A video of this system (but not the original video that accompanied the paper) is available on YouTube:

Watch on YouTube

Video Abstract: Bumping Objects Together as a Semantically Rich Way of Forming Connections between Ubiquitous Devices

Hinckley, K., Bumping Objects Together as a Semantically Rich Way of Forming Connections between Ubiquitous Devices. UbiComp 2003 Formal Video Program, Seattle, WA, Oct. 12-15, 2003. [Abstract – PDF] [video .MPEG]

Watch on YouTube