Category Archives: cross-device interaction

Paper: The “Seen but Unnoticed” Vocabulary of Natural Touch: Revolutionizing Direct Interaction with Our Devices and One Another

In this Vision (for the UIST 2021 Symposium on User Interface Software & Technology), I argue that “touch” input and interaction remains in its infancy when viewed in the context of the seen but unnoticed vocabulary of natural human behaviors, activity, and environments that surround direct interaction with displays.

Unlike status-quo touch interaction — a shadowplay of fingers on a single screen — I argue that our perspective of direct interaction should encompass the full rich context of individual use (whether via touch, sensors, or in combination with other modalities), as well as collaborative activity where people are engaged in local (co-located), remote (tele-present), and hybrid work.

We can further view touch through the lens of the “Society of Devices,” where each person’s activities span many complementary, oft-distinct devices that offer the right task affordance (input modality, screen size, aspect ratio, or simply a distinct surface with dedicated purpose) at the right place and time.

While many hints of this vision already exist in the literature, I speculate that a comprehensive program of research to systematically inventory, sense, and design interactions around such human behaviors and activities—and that fully embrace touch as a multi-modal, multi-sensor, multi-user, and multi-device construct—could revolutionize both individual and collaborative interaction with technology.


For the remote presentation, instead of a normal academic talk, I recruited my friend and colleague Nicolai Marquardt to have a 15-minute conversation with me about the vision and some of its implications:

Watch the “Seen but Unnoticed” UIST 2021 Vision presentation video on YouTube


Several aspects of this vision paper relate to a larger Microsoft Research project known as SurfaceFleet that explores the distributed systems and user experience implications of a “Society of Devices” in the New Future of Work.


Ken Hinckley. The “Seen but Unnoticed” Vocabulary of Natural Touch: Revolutionizing Direct Interaction with Our Devices and One Another. UIST Vision presented at The 34th Annual ACM Symposium on User Interface Software and Technology (UIST ’21). Non-archival publication, 5 pages. Virtual Event, USA, Oct 10-14, 2021.
https://arxiv.org/abs/2310.03958

[PDF] [captioned presentation video – mp4]

Paper: SurfaceFleet: Exploring Distributed Interactions Unbounded from Device, Application, User, and Time

SurfaceFleet is a multi-year project at Microsoft Research contributing a system and toolkit that uses resilient and performant distributed programming techniques to explore cross-device user experiences.

With appropriate design, these technologies afford mobility of user activity unbounded by device, application, user, and time.

The vision of the project is to enable a future where an ecosystem of technologies seamlessly transitions user activity from one place to another — whether that “place” takes the form of a literal location, a different device form-factor, the presence of a collaborator, or the availability of the information needed to complete a particular task.

The goal is a Society of Technologies that fosters meaningful relationships amongst the members of this society, rather than privileging any particular device.

This engenders mobility of user activity in a way that takes advantage of recent advances in networking and storage, and that supports consumer trends of multiple device usage and distributed workflows—not the least of which is the massive global shift towards remote work (bridging multiple users, on multiple devices, across local and remote locations).



In this particular paper, published at UIST 2020, we explored the trend for knowledge work to increasingly span multiple computing surfaces.

Yet in status quo user experiences, content as well as tools, behaviors, and workflows are largely bound to the current device—running the current application, for the current user, and at the current moment in time.

This work is where we first introduce SurfaceFleet as a system and toolkit founded on resilient distributed programming techniques. We then leverage this toolkit to explore a range of cross-device interactions that are unbounded in these four dimensions of device, application, user, and time.

As a reference implementation, we describe an interface built using SurfaceFleet that employs lightweight, semi-transparent UI elements known as Applets.

Applets appear always-on-top of the operating system, application windows, and (conceptually) above the device itself. But all connections and synchronized data are virtualized and made resilient through the cloud.

For example, a sharing Applet known as a Portfolio allows a user to drag and drop unbound Interaction Promises into a document. Such promises can then be fulfilled with content asynchronously, at a later time (or multiple times), from another device, and by the same or a different user.
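To make this concrete, here is a minimal sketch, in Python, of how an unbound Interaction Promise might be represented and fulfilled later. It is purely illustrative: the real SurfaceFleet toolkit is a distributed programming system, and the InteractionPromise and Portfolio classes below are simplified stand-ins rather than its actual API.

```python
import time
import uuid
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class InteractionPromise:
    """A placeholder dropped into a document now, fulfilled with content later."""
    promise_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    created_by: str = ""
    created_on_device: str = ""
    content: Optional[str] = None           # None until the promise is fulfilled
    fulfilled_by: Optional[str] = None
    fulfilled_at: Optional[float] = None

    def fulfill(self, content: str, user: str) -> None:
        """Bind content to the promise, possibly from another device, user, or time."""
        self.content = content
        self.fulfilled_by = user
        self.fulfilled_at = time.time()

class Portfolio:
    """A sharing surface that holds promises; a stand-in for cloud-backed sync."""
    def __init__(self) -> None:
        self._promises: dict[str, InteractionPromise] = {}

    def drop_promise(self, user: str, device: str) -> InteractionPromise:
        promise = InteractionPromise(created_by=user, created_on_device=device)
        self._promises[promise.promise_id] = promise
        return promise

    def fulfill(self, promise_id: str, content: str, user: str) -> None:
        self._promises[promise_id].fulfill(content, user)

# Usage: Ann drops a promise into a document from her desktop; Bob fulfills it
# later from his phone, and the placeholder resolves to the delivered content.
portfolio = Portfolio()
placeholder = portfolio.drop_promise(user="ann", device="desktop")
portfolio.fulfill(placeholder.promise_id, "photo_of_whiteboard.png", user="bob")
print(placeholder.content, "provided by", placeholder.fulfilled_by)
```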


This work leans heavily into present computing trends suggesting that cross-device and distributed systems will have a major impact on HCI going forward:

With Moore’s Law at an end, yet networking and storage exhibiting exponential gains, the future appears to favor systems that emphasize seamless mobility of data, rather than computation bound to any particular CPU.

At the same time, the ubiquity of connected and inter-dependent devices, of many different form factors, hints at a Society of Technologies that establishes meaningful relationships amongst the members of this society.

This favors the mobility of user activity, rather than attachment to any particular device, to achieve a future where HCI can meet its full human potential.

Overall, SurfaceFleet advances this perspective through a concrete system implementation as well as our unifying conceptual contribution that frames mobility as transitions in place in terms of device, application, user, and time—and the resulting exploration of techniques that simultaneously bridge all four of these gaps.

Watch SurfaceFleet video on YouTube


Frederik Brudy*, David Ledo*, Michel Pahud, Nathalie Henry Riche, Christian Holz, Anand Waghmare, Hemant Surale, Marcus Peinado, Xiaokuan Zhang, Shannon Joyner, Badrish Chandramouli, Umar Farooq Minhas, Jonathan Goldstein, Bill Buxton, and Ken Hinckley. SurfaceFleet: Exploring Distributed Interactions Unbounded from Device, Application, User, and Time. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (UIST ’20). ACM, New York, NY, USA. Virtual Event, USA, October 20-23, 2020, pp. 7-21. https://doi.org/10.1145/3379337.3415874
* The first two authors contributed equally to this work.

[PDF] [30-second preview – mp4] [Full video – mp4] [Supplemental video “How to” – mp4 | Supplemental video on YouTube].

[Frederik Brudy and David Ledo’s SurfaceFleet virtual talk from UIST 2020 on YouTube]

Paper: WritLarge: Ink Unleashed by Unified Scope, Action, & Zoom

Electronic whiteboards remain surprisingly difficult to use in the context of creativity support and design.

A key problem is that once a designer places strokes and reference images on a canvas, actually doing anything useful with key parts of that content involves numerous steps.

Hence, with digital ink, scope—that is, selection of content—is a central concern, yet current approaches often require encircling ink with a lengthy lasso, if not switching modes via round-trips to the far-off edges of the display.

Only then can the user take action, such as to copy, refine, or re-interpret their informal work-in-progress.

Such is the stilted nature of selection and action in the digital world.

But it need not be so.

By contrast, consider an everyday manual task such as sandpapering a piece of woodwork to hew off its rough edges. Here, we use our hands to grasp and bring to the fore—that is, select—the portion of the work-object—the wood—that we want to refine.

And because we are working with a tool—the sandpaper—the hand employed for this ‘selection’ sub-task is typically the non-preferred one, which skillfully manipulates the frame-of-reference for the subsequent ‘action’ of sanding, a complementary sub-task articulated by the preferred hand.

Therefore, in contrast to the disjoint subtasks foisted on us by most interactions with computers, the above example shows how complementary manual activities lend a sense of flow that “chunks” selection and action into a continuous selection-action phrase. By manipulating the workspace, the off-hand shifts the context of the actions to be applied, while the preferred hand brings different tools to bear—such as sandpaper, file, or chisel—as necessary.

The main goal of the WritLarge project, then, is to demonstrate similar continuity of action for electronic whiteboards. This motivated free-flowing, close-at-hand techniques to afford unification of selection and action via bimanual pen+touch interaction.


Accordingly, we designed WritLarge so that the user can simply gesture as follows:

With the thumb and forefinger of the non-preferred hand, just frame a portion of the canvas.

And, unlike many other approaches to “handwriting recognition,” this approach to selecting key portions of an electronic whiteboard leaves the user in complete control of what gets recognized—as well as when recognition occurs—so as not to break the flow of creative work.

Indeed, building on this foundation, we designed ways to shift between flexible representations of freeform content by simply moving the pen along semantic, structural, and temporal axes of movement.
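As a rough illustration of the framing step, the sketch below shows how two non-preferred-hand contacts could define a selection frame that scopes which ink strokes a subsequent pen action applies to. The code is hypothetical (Python, with function names of my own choosing), not how WritLarge itself is implemented.

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

def frame_from_touches(thumb: Point, forefinger: Point) -> tuple[Point, Point]:
    """Two non-preferred-hand contacts define an axis-aligned selection frame."""
    lo = Point(min(thumb.x, forefinger.x), min(thumb.y, forefinger.y))
    hi = Point(max(thumb.x, forefinger.x), max(thumb.y, forefinger.y))
    return lo, hi

def strokes_in_frame(strokes, frame):
    """Scope = ink strokes whose points all fall inside the framed region."""
    lo, hi = frame
    inside = lambda p: lo.x <= p.x <= hi.x and lo.y <= p.y <= hi.y
    return [s for s in strokes if all(inside(p) for p in s)]

# Usage: frame a region with the off-hand, then act on the scoped ink with the pen.
ink = [[Point(10, 10), Point(12, 14)], [Point(200, 200)]]
frame = frame_from_touches(Point(0, 0), Point(100, 100))
selection = strokes_in_frame(ink, frame)
print(len(selection), "stroke(s) selected")   # -> 1 stroke(s) selected
```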

See our demo reel below for some jaw-dropping demonstrations of the possibilities for digital ink opened up by this approach.

Watch WritLarge: Ink Unleashed by Unified Scope, Action, and Zoom video on YouTube


Haijun Xia, Ken Hinckley, Michel Pahud, Xiao Tu, and Bill Buxton. 2017. WritLarge: Ink Unleashed by Unified Scope, Action, and Zoom. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI’17). ACM, New York, NY, USA, pp. 3227-3240. Denver, Colorado, United States, May 6-11, 2017. Honorable Mention Award (top 5% of papers).
https://doi.org/10.1145/3025453.3025664

[PDF] [30 second preview – mp4 | YouTube] [Full video – mp4]

Paper: As We May Ink? Learning from Everyday Analog Pen Use to Improve Digital Ink Experiences

This work sheds light on gaps and discrepancies between the experiences afforded by analog pens and their digital counterparts.

Despite the long history (and recent renaissance) of digital pens, the literature still lacks a comprehensive survey of what types of marks people make and what motivates them to use ink—both analog and digital—in daily life.


To capture the diversity of inking behaviors and tease out the unique affordances of pen and ink, we conducted a diary study with 26 participants from diverse backgrounds.

From analysis of 493 diary entries we identified 8 analog pen-and-ink activities and 9 affordances of pens. We contextualized and contrasted these findings using a survey with 1,633 respondents and a follow-up diary study with 30 participants, this time observing digital pen use.

Our analysis revealed many gaps and research opportunities based on pen affordances not yet fully explored in the literature.


Yann Riche, Nathalie Henry Riche, Ken Hinckley, Sarah Fuelling, Sarah Williams, and Sheri Panabaker. 2017. As We May Ink? Learning from Everyday Analog Pen Use to Improve Digital Ink Experiences. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI’17). ACM, New York, NY, USA, pp. 3241-3253. Denver, Colorado, United States, May 6-11, 2017.
https://doi.org/10.1145/3025453.3025716

[PDF] [CHI 2017 Talk Slides (PowerPoint)]

Paper: Wearables as Context for Guiard-abiding Bimanual Touch

This particular paper has a rather academic-sounding title, but at its heart it makes a very simple and interesting observation regarding touch that any user of touch-screen technology can perhaps appreciate.

The irony is this: when interaction designers talk about “natural” interaction, they often have touch input in mind. And so people tend to take that for granted. What could be simpler than placing a finger — or with the modern miracle of multi-touch, multiple fingers — on a display?

And indeed, an entire industry of devices and form-factors — everything from phones, tablets, drafting-tables, all the way up to large wall displays — has arisen from this assumption.

Yet, if we unpack “touch” as it’s currently realized on most touchscreens, we can see that it remains very much a poor man’s version of natural human touch.

For example, on a large electronic-whiteboard such as the 84″ Surface Hub, multiple people can work upon the display at the same time. And it feels natural to employ both hands — as one often does in a wide assortment of everyday manual activities, such as indicating a point on a whiteboard with your off-hand as you emphasize the same point with the marker (or electronic pen).

Yet much of this richness — obvious to anyone observing a colleague at a whiteboard — represents context that is completely lost with “touch” as manifest in the vast majority of existing touch-screen devices.

For example:

  • Who is touching the display?
  • Are they touching the display with one hand, or two?
  • And if two hands, which of the multiple touch-events generated come from the right hand, and which come from the left?

Well, when dealing with input to computers, the all-too-common answer from the interaction designer is a shrug, a mumbled “who the heck knows,” and a litany of assumptions built into the user interface to try and paper over the resulting ambiguities, especially when the two factors (which user, and which hand) compound one another.

The result is that such issues tend to get swept under the rug, and hardly anybody ever mentions them.

But the first step towards a solution is recognizing that we have a problem.

This paper explores the implications of one particular solution that we have prototyped, namely leveraging wearable devices on the user’s body as sensors that can augment the richness of touch events.

A fitness band worn on the non-preferred hand, for example, can sense the impulse resulting from making finger-contact with a display through its embedded motion sensors (accelerometers and gyros). If the fitness band and the display exchange information and IDs, the touch-event generated can then be associated with the left hand of a particular user. The inputs of multiple users instrumented in this manner can then be separated from one another, as well, and used as a lightweight form of authentication.
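A minimal sketch of the matching logic, assuming the band streams timestamped impulse events to the display; the function and the 50 ms window below are illustrative assumptions, not the paper's actual implementation:

```python
def attribute_touch(touch_time_s, band_impulses, window_s=0.050):
    """
    Associate a touch-down event with the wearable whose motion sensors
    registered an impulse closest in time to the contact.

    band_impulses: dict mapping (user_id, hand) -> list of impulse timestamps (s)
    Returns (user_id, hand), or None if no wearable corroborates the touch.
    """
    best, best_dt = None, window_s
    for (user_id, hand), timestamps in band_impulses.items():
        for t in timestamps:
            dt = abs(t - touch_time_s)
            if dt <= best_dt:
                best, best_dt = (user_id, hand), dt
    return best

# Usage: a touch lands at t = 12.503 s; Ann's left-hand band spiked at 12.501 s.
impulses = {("ann", "left"): [12.501], ("bob", "right"): [11.950]}
print(attribute_touch(12.503, impulses))   # -> ('ann', 'left')
```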

That then explains the “wearable” part of “Wearables as Context for Guiard-abiding Bimanual Touch,” the title of my most recent paper, but what the heck does “Guiard-abiding” mean?

Well, this is a reference to classic work by a research colleague, Yves Guiard, who is famous for a 1987 paper in which he made a number of key observations regarding how people use their hands — both of them — in everyday manual tasks.

Particularly, in a skilled manipulative task such as writing on a piece of paper, Yves pointed out (assuming a right-handed individual) three general principles:

  • Left hand precedence: The action of the left hand precedes the action of the right; the non-preferred hand first positions and orients the piece of paper, and only then does the pen (held in the preferred hand, of course) begin to write.
  • Differentiation in scale: The action of the left hand tends to occur at a larger temporal and spatial scale of motion; the positioning (and re-positioning) of the paper tends to be infrequent and relatively coarse compared to the high-frequency, precise motions of the pen in the preferred hand.
  • Right-to-Left Spatial Reference: The left hand sets a frame of reference for the action of the right; the left hand defines the position and orientation of the work-space into which the preferred hand inserts its contributions, in this example via the manipulation of a hand-held implement — the pen.

Well, as it turns out these three principles are very deep and general, and they can yield great insight into how to design interactions that fully take advantage of people’s everyday skills for two-handed (“bimanual”) manipulation — another aspect of “touch” that interaction designers have yet to fully leverage for natural interaction with computers.

This paper is a long way from a complete solution to the impoverished state of touch on modern touch-screens, but hopefully, by pointing out the problem and illustrating some consequences of augmenting touch with additional context (whether provided through wearables or other means), this work can lead to more truly “natural” touch interaction — allowing for simultaneous interaction by multiple users, both of whom can make full and complementary use of their hard-won manual skill with both hands — in the near future.


Wearables (fitness band and ring) provide missing context (who touches, and with what hand) for direct-touch bimanual interactions.

Andrew M. Webb, Michel Pahud, Ken Hinckley, and Bill Buxton. 2016. Wearables as Context for Guiard-abiding Bimanual Touch. In Proceedings of the 29th Annual ACM Symposium on User Interface Software and Technology (UIST ’16). ACM, New York, NY, USA, 287-300. Tokyo, Japan, Oct. 16-19, 2016. https://doi.org/10.1145/2984511.2984564
[PDF] [Talk slides PDF] [30 second preview on YouTube] [Full video – MP4]

Paper: Sensing Tablet Grasp + Micro-mobility for Active Reading

Lately I have been thinking about touch:

In the tablet-computer sense of the word.

To most people, this means the touchscreen. The intentional pokes and swipes and pinching gestures we would use to interact with a display.

But not to me.

Touch goes far beyond that.

Look at people’s natural behavior. When they refer to a book, or pass a document to a collaborator, there are two interesting behaviors that characterize the activity.

What I call the seen but unnoticed:

Simple habits and social cues, there all the time, but which fall below our conscious attention — if they are even noticed at all.

By way of example, let’s say we’re observing someone handle a magazine.

First, the person has to grasp the magazine. Seems obvious, but easy to overlook — and perhaps vital to understand. Although grasp typically doesn’t involve contact of the fingers with the touchscreen, this is a form of ‘touch’ nonetheless, even if it is one that traditionally hasn’t been sensed by computers.

Grasp reveals a lot about the intended use, whether the person might be preparing to pick up the magazine or pass it off, or perhaps settling down for a deep and immersive engagement with the material.

Second, as an inevitable consequence of grasping the magazine, it must move. Again, at first blush this seems obvious. But these movements may be overt, or they may be quite subtle. And to a keen eye — or an astute sensing system — they are a natural consequence of grasp, and indeed are what give grasp its meaning.

In this way, sensing grasp informs the detection of movements.

And, coming full circle, the movements thus detected enrich what we can glean from grasp as well.

Yet, this interplay of grasp and movement has rarely been recognized, much less actively sensed and used to enrich and inform interaction with tablet computers.

And this feeds back into a larger point that I have often found myself trying to make lately, namely that touch is about far more than interaction with the touch-screen alone.

If we want to really understand touch (as well as its future as a technology) then we need to deeply understand these other modalities — grasp and movement, and perhaps many more — and thereby draw out the full naturalness and expressivity of interaction with tablets (and mobile phones, and e-readers, and wearables, and many dreamed-of form-factors perhaps yet to come).

My latest publication looks into all of these questions, particularly as they pertain to reading electronic documents on tablets:

Watch Sensing Tablet Grasp + Micro-mobility for Active Reading video on YouTube

We constructed a tablet (albeit a green metallic beast of one at present) that can detect natural grips along its edges and on the entire back surface of the device. And with a full complement of inertial motion sensors, as well. This image shows the grip-sensing (back) side of our technological monstrosity:

Grip Sensing Tablet Hardware

But this set-up allowed us to explore ways of combining grip and subtle motion (what has sometimes been termed micro-mobility in the literature), resulting in the following techniques (among a number of others):

A Single User Engaging with a Single Device

Some of these techniques address the experience of an individual engaging with their own reading material.

For example, you can hold a bookmark with your thumb (much as you can keep your finger on a page in a physical book) and then tip the device. This flips back to the page that you’re holding:


This ‘Tip-to-Flip’ interaction involves both the grip and the movement of the device and results in a fairly natural interaction that builds on a familiar habit from everyday experience with physical documents.

Another one we experimented with was a very subtle interaction that mimics holding a document and angling it up to inspect it more closely. When we sense this, the tablet zooms in slightly on the page, while removing all peripheral distractions such as menu-bars and icons:

Immersive Reading mode through grip sensing

This immerses the reader in the content, rather than the iconographic gewgaws which typically border the screen of an application as if to announce, “This is a computer!”
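As a rough sketch of how such grip + motion cues might be fused, consider the following; the thresholds, sign conventions, and sensor abstractions are hypothetical rather than the prototype's actual code:

```python
def interpret_grip_and_motion(thumb_on_bookmark, holding_both_edges, pitch_delta_deg):
    """
    Map sensed grip plus recent device motion to a reading action.
    thumb_on_bookmark:  thumb resting on the on-screen bookmark region
    holding_both_edges: grip sensor reports both hands on the tablet edges
    pitch_delta_deg:    recent change in device pitch from the gyro/accelerometer
    """
    if thumb_on_bookmark and pitch_delta_deg < -25:
        return "tip_to_flip"        # flip back to the page held under the thumb
    if holding_both_edges and pitch_delta_deg > 15:
        return "immersive_reading"  # zoom the content, hide menus and icons
    return None

# Usage: thumb held on the bookmark, device tipped -> flip back to that page.
print(interpret_grip_and_motion(True, False, -30))   # -> tip_to_flip
print(interpret_grip_and_motion(False, True, 20))    # -> immersive_reading
```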

Multiple Users Collaborating around a Single Device

Another set of techniques we explored looked at how people pass devices to one another.

In everyday experience, passing a paper document to a collaborator is a very natural — and different — form of “sharing,” as compared to the oft-frustrating electronic equivalents we have at our disposal.

Likewise, computers should be able to sense and recognize such gestures in the real world, and use them to bring some of the socially and situationally appropriate sharing that they afford to the world of electronic documents.

We explored one such technique that automatically sets up a guest profile when you hand a tablet (displaying a specific document) to another user:


The other user can then read and mark-up that document, but he is not the beneficiary of a permanent electronic copy of it (as would be the case if you emailed him an attachment), nor is he permitted to navigate to other areas or look at other files on your tablet.

You’ve physically passed him the electronic document, and all he can do is look at it and mark it up with a pen.

Not unlike the semantics — long absent and sorely missed in computing — of a simple piece of paper.
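A minimal sketch of the policy this implies, assuming the handoff itself has already been detected from grip and motion; the field names below are illustrative, not the prototype's actual implementation:

```python
def on_tablet_handoff(handoff_detected: bool, current_document: str):
    """
    When grip + motion sensing indicates the tablet has been passed to another
    person, switch to a restricted guest session scoped to the document that
    was on screen at the moment of the handoff.
    """
    if not handoff_detected:
        return None
    return {
        "profile": "guest",
        "visible_document": current_document,
        "can_annotate": True,         # pen mark-up is allowed
        "can_navigate_files": False,  # no browsing the owner's other content
        "can_keep_copy": False,       # no permanent copy, unlike an email attachment
    }

# Usage: the owner passes the tablet while a contract is on screen.
print(on_tablet_handoff(True, "contract.pdf"))
```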

A Single User Working With Multiple Devices

A final area we looked at considers what happens when people work across multiple tablets.

We already live in a world where people own and use multiple devices, often side-by-side, yet our devices typically have little or no awareness of one another.

But contrast this to the messy state of people’s physical desks, with documents strewn all over. People often place documents side-by-side as a lightweight and informal way of organization, and might dexterously pick one up or hold it at the ready for quick reference when engaged in an intellectually demanding task.

Again, missing from the world of the tablet computer.

But by sensing which tablets you hold, or pick up, our system allows people to quickly refer to and cross-reference content across federations of such devices.

While the “Internet of Things” may be all the rage these days among the avant-garde of computing, such federations remain uncommon and in our view represent the future of a ‘Society of Devices’ that can recognize and interact with one another, all while respecting social mores, not the least of which are the subtle “seen but unnoticed” social cues afforded by grasping, moving, and orienting our devices.


Closing Thoughts: An Expanded Perspective of ‘Touch’

The examples above represent just a few simple steps. Much more can, and should, be done to fully explore and vet these directions.

But by viewing touch as far more than simple contact of the fingers with a grubby touchscreen — and expanding our view to consider grasp, movement of the device, and perhaps other qualities of the interaction that could be sensed in the future as well — our work hints at a far wider perspective.

A perspective teeming with the possibilities that would be raised by a society of mobile appliances with rich sensing capabilities, potentially leading us to far more natural, more expressive, and more creative ways of engaging in the knowledge work of the future.


Dongwook Yoon, Ken Hinckley, Hrvoje Benko, François Guimbretière, Pourang Irani, Michel Pahud, and Marcel Gavriliu. 2015. Sensing Tablet Grasp + Micro-mobility for Active Reading. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST ’15). ACM, New York, NY, USA, 477-487. Charlotte, NC, Nov. 8-11, 2015. http://dx.doi.org/10.1145/2807442.2807510
[PDF] [Talk slides – PowerPoint] [30 second preview – mp4] [Full video – mp4 | YouTube]

Book Chapter: Input/Output Devices and Interaction Techniques, Third Edition

Ken Hinckley, Robert J.K. Jacob, Colin Ware, Jacob O. Wobbrock, and Daniel Wigdor. Input/Output Devices and Interaction Techniques. Appears as Chapter 21 in The Computing Handbook, Third Edition: Two-Volume Set, ed. by Tucker, A., Gonzalez, T., Topi, H., and Diaz-Herrera, J. Published by Chapman and Hall/CRC (Taylor & Francis), May 13, 2014. ISBN 9781439898444. [PDF – Author’s Draft – may contain discrepancies]

Paper: Gradual Engagement between Digital Devices as a Function of Proximity: From Awareness to Progressive Reveal to Information Transfer

I collaborated on a nifty project with the fine folks from Saul Greenberg’s group at the University of Calgary exploring the emerging possibilities for devices to sense and respond to their digital ecology. When devices have fine-grained sensing of their spatial relationships to one another, as well as to the people in that space, it brings about new ways for users to interact with the resulting system of cooperating devices and displays.

This fine-grained sensing approach makes for an interesting contrast to what Nic Marquardt and I explored in GroupTogether, which intentionally took a more conservative approach towards the sensing infrastructure — with the idea in mind that sometimes, one can still do a lot with very little (sensing).

Taken together, the two papers nicely bracket some possibilities for the future of cross-device interactions and intelligent environments.

This work really underscores that we are still largely in the dark ages with regard to such possibilities for digital ecologies. As new sensors and sensing systems make this kind of rich awareness of the surround of devices and users possible, our devices, operating systems, and user experiences will grow to encompass the expanded horizons of these new possibilities as well.

The full citation and the link to our scientific paper are as follows:

Gradual Engagement with devices via proximity sensing

Marquardt, N., Ballendat, T., Boring, S., Greenberg, S. and Hinckley, K., Gradual Engagement between Digital Devices as a Function of Proximity: From Awareness to Progressive Reveal to Information Transfer. In Proceedings of ACM Interactive Tabletops & Surfaces (ITS 2012). Boston, MA, USA, November 11-14. 10pp. [PDF] [video – MP4].

Watch the Gradual Engagement via Proximity video on YouTube

GroupTogether — Exploring the Future of a Society of Devices

My latest paper discussing the GroupTogether system just appeared at the 2012 ACM Symposium on User Interface Software & Technology in Cambridge, MA.

GroupTogether video available on YouTube

I’m excited about this work — it really looks hard at what some of the next steps in sensing systems might be, particularly when one starts considering how users can most effectively interact with one another in the context of the rapidly proliferating Society of Devices we are currently witnessing.

I think our paper on the GroupTogether system, in particular, does a really nice job of exploring this with strong theoretical foundations drawn from the sociological literature.

F-formations are the various types of small groups that people form when engaged in a joint activity.

GroupTogether starts by considering the natural small-group behaviors adopted by people who come together to accomplish some joint activity. These small groups can take a variety of distinctive forms, and are known collectively in the sociological literature as f-formations. Think of those distinctive circles of people that form spontaneously at parties: typically they are limited to a maximum of about 5 people, the orientation of the participants clearly defines an area inside the group that is distinct from the rest of the environment outside the group, and there are fairly well established social protocols for people entering and leaving the group.

A small group of two users as sensed via GroupTogether’s overhead Kinect depth-cameras.

GroupTogether also senses the subtle orientation cues of how users handle and posture their tablet computers. These cues are known as micro-mobility, a communicative strategy that people often employ with physical paper documents, such as when a sales representative orients a document towards you to direct your attention and indicate that it is your turn to sign.

Our system, then, is the first to put small-group f-formations, sensed via overhead Kinect depth-camera tracking, in play simultaneously with the micro-mobility of slate computers, sensed via embedded accelerometers and gyros.

The GroupTogether prototype sensing environment and set-up

GroupTogether uses f-formations to give meaning to the micro-mobility of slate computers. It understands which users have come together in a small group, and which users have not. So you can just tilt your tablet towards a couple of friends standing near you to share content, whereas another person who may be nearby but facing the other way — and thus clearly outside of the social circle of the small group — would not be privy to the transaction. Thus, the techniques lower the barriers to sharing information in small-group settings.
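A minimal sketch of that gating logic, assuming the overhead tracker already reports f-formation membership and the tablet reports how far it has been tilted toward the group; the names and threshold are illustrative, not the actual GroupTogether implementation:

```python
def share_on_tilt(sender, tilt_toward_deg, f_formations, min_tilt_deg=20):
    """
    Share only when (a) the sender tilts the tablet past a threshold and
    (b) the sender currently belongs to a sensed f-formation. Content goes
    to the other members of that group, and to no one else nearby.
    """
    if tilt_toward_deg < min_tilt_deg:
        return []
    for group in f_formations:               # each group is a set of user ids
        if sender in group:
            return sorted(group - {sender})  # recipients: the rest of the group
    return []                                # not in any group -> nothing is shared

# Usage: Ann and Bob form a group; Carol stands nearby but faces away.
groups = [{"ann", "bob"}]
print(share_on_tilt("ann", tilt_toward_deg=30, f_formations=groups))    # -> ['bob']
print(share_on_tilt("carol", tilt_toward_deg=30, f_formations=groups))  # -> []
```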

Check out the video to see what these techniques look like in action, as well as to see how the system also considers groupings of people close to situated displays such as electronic whiteboards.

The full text of our scientific paper on GroupTogether and the citation are also available.

My co-author Nic Marquardt was the first author and delivered the talk. Saul Greenberg of the University of Calgary also contributed many great insights to the paper.

Image credits: Nic Marquardt

Paper: Cross-Device Interaction via Micro-mobility and F-formations (“GroupTogether”)

Marquardt, N., Hinckley, K., and Greenberg, S., Cross-Device Interaction via Micro-mobility and F-formations. In ACM UIST 2012 Symposium on User Interface Software and Technology (UIST ’12). ACM, New York, NY, USA, Cambridge, MA, Oct. 7-10, 2012, pp. (TBA). [PDF] [video – WMV]. Known as the GroupTogether system.

See also my post with some further perspective on the GroupTogether project.

Watch the GroupTogether video on YouTube