Category Archives: Personal Space & Proxemics

Paper: Pre-Touch Sensing for Mobile Interaction

I have to admit it: I feel as if I’m looking at the sunrise of what may be a whole new way of interacting with mobile devices.

When I think about it, the possibilities bathe my eyes in a golden glow, and the warmth drums against my skin.

And in particular, my latest research peers out across this vivid horizon, to where I see touch — and mobile interaction with touchscreens in particular — evolving in the near future.

As a seasoned researcher, my job (which in reality is some strange admixture of interaction designer, innovator, and futurist) is not necessarily to predict the future, but rather to invent it via extrapolation from a sort of visionary present which occupies my waking dreams.

I see things not as they are, but as they could be, through the lens afforded by a (usually optimistic) extrapolation from extant technologies, or those I know are likely to soon become more widely available.

With regards to interaction with touchscreens in particular, it has been clear to me for some time that the ability to sense the fingers as they approach the device — well before contact with the screen itself — is destined to become commonplace on commodity devices.

This is interesting for a number of reasons.

And no, the ability to do goofy gestures above the screen, waving at it frantically (as if it were a fancy-pants towel dispenser in a public restroom) in some dim hope of receiving an affirmative response, is not one of them.

In terms of human capabilities, one obviously cannot touch the screen of a mobile device without approaching it first.

But what often goes unrecognized is that one also must hold the device, typically in the non-preferred hand, as a precursor to touch. Hence, how you hold the device — the pattern of your grip and which hand you hold it in — is an additional detail of context that is more or less wholly ignored by current mobile devices.

So in this new work, my colleagues and I collectively refer to these two precursors of touch — approach and the need to grip the device — as pre-touch.

And it is my staunch belief that the ability to sense such pre-touch information could radically transform the mobile ‘touch’ interfaces that we all have come to take for granted.
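To make the grip half of this concrete: here is a minimal sketch — in Python, with an entirely hypothetical edge-contact reading rather than our prototype’s actual sensor API — of how a device might guess which hand is holding it from the pattern of contacts along its edges:

```python
# Minimal sketch: guessing which hand holds the phone from edge contacts.
# The EdgeContacts structure and the thresholds below are illustrative
# assumptions, not the sensing stack used in the actual prototype.

from dataclasses import dataclass

@dataclass
class EdgeContacts:
    left: list[float]    # y-positions of contacts on the left edge (0..1)
    right: list[float]   # y-positions of contacts on the right edge (0..1)

def infer_grip_hand(contacts: EdgeContacts) -> str:
    """A right-handed grip typically shows several fingertips wrapped on the
    left edge and a lone thumb (or palm) contact on the right edge; a
    left-handed grip mirrors that pattern."""
    if len(contacts.left) >= 3 and len(contacts.right) <= 1:
        return "right-hand grip"
    if len(contacts.right) >= 3 and len(contacts.left) <= 1:
        return "left-hand grip"
    return "two-handed or unknown grip"

# Example: four fingers wrapped on the left edge, thumb on the right.
print(infer_grip_hand(EdgeContacts(left=[0.2, 0.35, 0.5, 0.65], right=[0.4])))
```

Even a crude inference of this sort could, for instance, let an interface lean its controls toward the reach of the gripping thumb.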

You can get a sense of these possibilities, all implemented on a fully functional mobile phone with pre-touch sensing capability, in our demo reel below:

The project received a lot of attention, and coverage from many of the major tech blogs and other media outlets, for example:

  • The Verge (“Microsoft’s hover gestures for Windows phones are magnificent”)
  • SlashGear (“Smartphones next big thing: ‘Pre-Touch’”)
  • Business Insider (“Apple should definitely copy Microsoft’s incredible finger-sensing smartphone technology”)
  • And Fast Company Design (plus a second appearance in “8 Incredible Prototypes That Show The Future Of Human-Computer Interaction”)

But I rather liked the take that Silicon Angle offered, which took my concluding statement from the video above:

Taken as a whole, our exploration of pre-touch hints that the evolution of mobile touch may still be in its infancy – with many possibilities, unbounded by the flatland of the touchscreen, yet to explore.

And then responded as follows:

This is the moon-landing-esque conclusion Microsoft comes to after demonstrating its rather cool pre-touch mobile technology, i.e., a mobile phone that senses what your fingers are about to do.

While this evolution of touch has been coming in the research literature for at least a decade now, what exactly to do with above- and around-screen sensing (especially in a mobile setting) has been far from obvious. And that’s where I think our work on pre-touch sensing techniques for mobile interaction distinguishes itself, and in so doing identifies some very interesting use cases that have never been realized before.

The very best of these new techniques possess a quality that I love, namely that they have a certain surprising obviousness to them:

The techniques seem obvious — but only in retrospect.

And only after you’ve been surprised by the new idea or insight that lurks behind them.

If such an effort is indeed the first hint of a moonshot for touch, well, that’s a legacy for this project that I can live with.

UPDATE: The talk I gave at the CHI 2016 conference on this project is now available. Have a gander if you are so inclined.


Thumb sensed as it hovers over a pre-touch mobile phone.

Ken Hinckley, Seongkook Heo, Michel Pahud, Christian Holz, Hrvoje Benko, Abigail Sellen, Richard Banks, Kenton O’Hara, Gavin Smyth, William Buxton. 2016. Pre-Touch Sensing for Mobile Interaction. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA, pp. 2869-2881. San Jose, CA, May 7-12, 2016.

[PDF] [Talk slides PPTX] [video – MP4] [30 second preview – MP4] [Watch on YouTube]

Watch Pre-Touch Sensing for Mobile Interaction video on YouTube


Paper: Sensing Tablet Grasp + Micro-mobility for Active Reading

Lately I have been thinking about touch:

In the tablet-computer sense of the word.

To most people, this means the touchscreen. The intentional pokes and swipes and pinching gestures we would use to interact with a display.

But not to me.

Touch goes far beyond that.

Look at people’s natural behavior. When they refer to a book, or pass a document to a collaborator, there are two interesting behaviors that characterize the activity.

What I call the seen but unnoticed:

Simple habits and social cues, there all the time, but which fall below our conscious attention — if they are even noticed at all.

By way of example, let’s say we’re observing someone handle a magazine.

First, the person has to grasp the magazine. Seems obvious, but easy to overlook — and perhaps vital to understand. Although grasp typically doesn’t involve contact of the fingers with the touchscreen, this is a form of ‘touch’ nonetheless, even if it is one that traditionally hasn’t been sensed by computers.

Grasp reveals a lot about the intended use, whether the person might be preparing to pick up the magazine or pass it off, or perhaps settling down for a deep and immersive engagement with the material.

Second, as an inevitable consequence of grasping the magazine, it must move. Again, at first blush this seems obvious. But these movements may be overt, or they may be quite subtle. And to a keen eye — or an astute sensing system — they are a natural consequence of grasp, and indeed are what give grasp its meaning.

In this way, sensing grasp informs the detection of movements.

And, coming full circle, the movements thus detected enrich what we can glean from grasp as well.
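As a toy illustration of this interplay — with sensor fields and thresholds that are purely illustrative, not the classifier we actually built — consider:

```python
# Toy illustration of the grasp/motion feedback loop described above: grip
# gives context for interpreting raw motion, and the motion in turn refines
# what the grip likely means. All numbers are illustrative placeholders.

def interpret(grip_contact_area: float, angular_speed: float) -> str:
    holding = grip_contact_area > 0.05   # some fraction of the edges/back is covered
    moving = angular_speed > 0.5         # rad/s: the device is being turned or tipped

    if holding and not moving:
        return "stable grip: likely settled in for reading"
    if holding and moving:
        return "grip plus motion: a deliberate gesture, e.g. picking up or passing"
    if not holding and moving:
        return "motion without grip: jostled in a bag, so ignore it"
    return "resting untouched on the table"
```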

Yet, this interplay of grasp and movement has rarely been recognized, much less actively sensed and used to enrich and inform interaction with tablet computers.

And this feeds back into a larger point that I have often found myself trying to make lately, namely that touch is about far more than interaction with the touch-screen alone.

If we want to really understand touch (as well as its future as a technology) then we need to deeply understand these other modalities — grasp and movement, and perhaps many more — and thereby draw out the full naturalness and expressivity of interaction with tablets (and mobile phones, and e-readers, and wearables, and many dreamed-of form-factors perhaps yet to come).

My latest publication looks into all of these questions, particularly as they pertain to reading electronic documents on tablets.

We constructed a tablet (albeit a green metallic beast of one at present) that can detect natural grips along its edges and on the entire back surface of the device, and that carries a full complement of inertial motion sensors as well. This image shows the grip-sensing (back) side of our technological monstrosity:

Grip Sensing Tablet Hardware

But this set-up allowed us to explore ways of combining grip and subtle motion (what has sometimes been termed micro-mobility in the literature), resulting in the following techniques (among a number of others):

A Single User Engaging with a Single Device

Some of these techniques address the experience of an individual engaging with their own reading material.

For example, you can hold a bookmark with your thumb (much as you can keep your finger on a page in a physical book) and then tip the device. This flips back to the page that you’re holding:


This ‘Tip-to-Flip’ interaction involves both the grip and the movement of the device and results in a fairly natural interaction that builds on a familiar habit from everyday experience with physical documents.
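In rough terms, the recognizer might look something like the following sketch — Python, with hypothetical inputs (a sensed thumb ‘bookmark’ grip plus a tilt angle from the inertial sensors) and an illustrative threshold, not the code running on our prototype:

```python
# Sketch of the Tip-to-Flip idea: a thumb held on the edge "pins" a page,
# and tipping the device past a threshold flips back to that page.
# Threshold and structure are illustrative assumptions.

class Reader:
    def __init__(self, current_page: int):
        self.current_page = current_page
        self.bookmarked_page = None          # page pinned by the thumb, if any

    def on_thumb_bookmark(self, page: int) -> None:
        self.bookmarked_page = page          # thumb pressed against the edge

    def on_tilt(self, tilt_degrees: float, thumb_still_held: bool) -> None:
        if thumb_still_held and self.bookmarked_page is not None and tilt_degrees > 30:
            self.current_page = self.bookmarked_page   # flip back to the held page

reader = Reader(current_page=12)
reader.on_thumb_bookmark(page=12)
reader.current_page = 20                     # the user reads onward
reader.on_tilt(tilt_degrees=45, thumb_still_held=True)
assert reader.current_page == 12             # back to the page held by the thumb
```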

Another one we experimented with was a very subtle interaction that mimics holding a document and angling it up to inspect it more closely. When we sense this, the tablet zooms in slightly on the page, while removing all peripheral distractions such as menu-bars and icons:

Immersive Reading mode through grip sensing

This immerses the reader in the content, rather than the iconographic gewgaws which typically border the screen of an application as if to announce, “This is a computer!”
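A minimal sketch of how such a behavior could be wired up — with a made-up pitch threshold and field names, not our actual implementation — might be:

```python
# Sketch of the "angle it up to inspect it" behavior: when a firm grip is
# sensed and the tablet is pitched up toward the reader, zoom in slightly and
# hide the surrounding chrome. Names and thresholds are illustrative.

def update_reading_view(gripped: bool, pitch_degrees: float, view: dict) -> dict:
    inspecting = gripped and pitch_degrees > 20   # tilted up toward the face
    view["zoom"] = 1.15 if inspecting else 1.0    # zoom in slightly on the page
    view["show_chrome"] = not inspecting          # menu bars, icons, and so on
    return view
```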

Multiple Users Collaborating around a Single Device

Another set of techniques we explored looked at how people pass devices to one another.

In everyday experience, passing a paper document to a collaborator is a very natural — and different — form of “sharing,” as compared to the oft-frustrating electronic equivalents we have at our disposal.

Likewise, computers should be able to sense and recognize such gestures in the real world, and use them to bring some of the socially and situationally appropriate sharing that they afford to the world of electronic documents.

We explored one such technique that automatically sets up a guest profile when you hand a tablet (displaying a specific document) to another user:


The other user can then read and mark-up that document, but he is not the beneficiary of a permanent electronic copy of it (as would be the case if you emailed him an attachment), nor is he permitted to navigate to other areas or look at other files on your tablet.

You’ve physically passed him the electronic document, and all he can do is look at it and mark it up with a pen.

Not unlike the semantics — long absent and sorely missed in computing — of a simple piece of paper.
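One way to think about this guest profile is as a deliberately hobbled session object. The sketch below — with class and method names invented for illustration, not our system’s API — captures the spirit:

```python
# Sketch of the guest-profile idea: a hand-off opens the current document in a
# restricted session that permits ink annotation but nothing else.

class GuestSession:
    def __init__(self, document):
        self.document = document
        self.annotations = []

    def annotate(self, ink_stroke):
        self.annotations.append(ink_stroke)   # marking up with the pen is allowed

    def save_copy(self):
        raise PermissionError("guests do not receive a persistent copy")

    def open_other_file(self, path):
        raise PermissionError("guests cannot browse the host's other files")

def on_tablet_handoff_detected(current_document):
    # Triggered when grip + motion sensing indicates the tablet was passed
    # to another person while displaying a document.
    return GuestSession(current_document)
```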

A Single User Working With Multiple Devices

A final area we looked at considers what happens when people work across multiple tablets.

We already live in a world where people own and use multiple devices, often side-by-side, yet our devices typically have little or no awareness of one another.

But contrast this to the messy state of people’s physical desks, with documents strewn all over. People often place documents side-by-side as a lightweight and informal way of organizing them, and might dexterously pick one up or hold it at the ready for quick reference when engaged in an intellectually demanding task.

Again, missing from the world of the tablet computer.

But by sensing which tablets you hold, or pick up, our system allows people to quickly refer to and cross-reference content across federations of such devices.
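A rough sketch of that idea — with an invented ‘federation’ registry and device names, purely for illustration:

```python
# Sketch of grip-driven cross-referencing across a federation of tablets:
# whichever tablet the user is sensed to pick up becomes the reference source.

class Federation:
    def __init__(self):
        self.documents = {}       # device name -> currently displayed document
        self.held = None          # device currently sensed as gripped, if any

    def register(self, name, document):
        self.documents[name] = document

    def on_grip_sensed(self, name):
        self.held = name

    def cross_reference(self, target):
        """Surface a reference to the held tablet's document on another tablet."""
        if self.held is None:
            return None
        return f"reference to '{self.documents[self.held]}' shown on {target}"

fed = Federation()
fed.register("tablet-A", "draft.pdf")
fed.register("tablet-B", "notes.pdf")
fed.on_grip_sensed("tablet-B")
print(fed.cross_reference("tablet-A"))   # notes.pdf referenced alongside draft.pdf
```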

While the “Internet of Things” may be all the rage these days among the avant-garde of computing, such federations remain uncommon and in our view represent the future of a ‘Society of Devices’ that can recognize and interact with one another, all while respecting social mores, not the least of which are the subtle “seen but unnoticed” social cues afforded by grasping, moving, and orienting our devices.


Closing Thoughts:

An Expanded Perspective of ‘Touch’

The examples above represent just a few simple steps. Much more can, and should, be done to fully explore and vet these directions.

But by viewing touch as far more than simple contact of the fingers with a grubby touchscreen — and expanding our view to consider grasp, movement of the device, and perhaps other qualities of the interaction that could be sensed in the future as well — our work hints at a far wider perspective.

A perspective teeming with the possibilities that would be raised by a society of mobile appliances with rich sensing capabilities, potentially leading us to far more natural, more expressive, and more creative ways of engaging in the knowledge work of the future.



Dongwook Yoon, Ken Hinckley, Hrvoje Benko, François Guimbretière, Pourang Irani, Michel Pahud, and Marcel Gavriliu. 2015. Sensing Tablet Grasp + Micro-mobility for Active Reading. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST ’15). ACM, New York, NY, USA, pp. 477-487. Charlotte, NC, Nov. 8-11, 2015.
[PDF] [Talk slides PPTX] [video – MP4] [30 second preview – MP4] [Watch on YouTube]

Watch Sensing Tablet Grasp + Micro-mobility for Active Reading video on YouTube

Paper: Gradual Engagement between Digital Devices as a Function of Proximity: From Awareness to Progressive Reveal to Information Transfer

I collaborated on a nifty project with the fine folks from Saul Greenberg’s group at the University of Calgary exploring the emerging possibilities for devices to sense and respond to their digital ecology. When devices have fine-grained sensing of their spatial relationships to one another, as well as to the people in that space, it brings about new ways for users to interact with the resulting system of cooperating devices and displays.

This fine-grained sensing approach makes for an interesting contrast to what Nic Marquardt and I explored in GroupTogether, which intentionally took a more conservative approach towards the sensing infrastructure — with the idea in mind that sometimes, one can still do a lot with very little (sensing).

Taken together, the two papers nicely bracket some possibilities for the future of cross-device interactions and intelligent environments.

This work really underscores that we are still largely in the dark ages with regard to such possibilities for digital ecologies. As new sensors and sensing systems make this kind of rich awareness of the surround of devices and users possible, our devices, operating systems, and user experiences will grow to encompass the expanded horizons of these new possibilities as well.
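The core idea in the paper’s title can be caricatured in a few lines: as one device draws nearer to another, the interaction escalates in stages, from bare awareness, to progressively revealing content, to enabling transfer. The distance thresholds below are illustrative placeholders, not the calibrated zones of the actual system:

```python
# Sketch of gradual engagement as a function of proximity, with made-up
# distance thresholds standing in for the system's sensed proxemic zones.

def engagement_stage(distance_m: float) -> str:
    if distance_m > 3.0:
        return "awareness"            # the other device merely notes your presence
    if distance_m > 1.0:
        return "progressive reveal"   # previews of available content appear
    return "information transfer"     # close enough to actually move content across

for d in (5.0, 2.0, 0.5):
    print(f"{d} m -> {engagement_stage(d)}")
```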

The full citation and the link to our scientific paper are as follows:

Gradual Engagement with devices via proximity sensing.

Marquardt, N., Ballendat, T., Boring, S., Greenberg, S. and Hinckley, K., Gradual Engagement between Digital Devices as a Function of Proximity: From Awareness to Progressive Reveal to Information Transfer. In Proceedings of ACM Interactive Tabletops & Surfaces (ITS 2012). Boston, MA, USA, November 11-14. 10pp. [PDF] [video – MP4].

Watch the Gradual Engagement via Proximity video on YouTube

GroupTogether — Exploring the Future of a Society of Devices

My latest paper discussing the GroupTogether system just appeared at the 2012 ACM Symposium on User Interface Software & Technology in Cambridge, MA.

GroupTogether video available on YouTube

I’m excited about this work — it really looks hard at what some of the next steps in sensing systems might be, particularly when one starts considering how users can most effectively interact with one another in the context of the rapidly proliferating Society of Devices we are currently witnessing.

I think our paper on the GroupTogether system, in particular, does a really nice job of exploring this with strong theoretical foundations drawn from the sociological literature.

F-formations are the various types of small groups that people form when engaged in a joint activity.

GroupTogether starts by considering the natural small-group behaviors adopted by people who come together to accomplish some joint activity. These small groups can take a variety of distinctive forms, and are known collectively in the sociological literature as f-formations. Think of those distinctive circles of people that form spontaneously at parties: typically they are limited to a maximum of about 5 people, the orientation of the participants clearly defines an area inside the group that is distinct from the rest of the environment outside the group, and there are fairly well established social protocols for people entering and leaving the group.

A small group of two users as sensed via GroupTogether’s overhead Kinect depth-cameras.
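For intuition, here is a much-simplified sketch of sensing a two-person f-formation from overhead tracking data — two people within conversational distance and roughly facing one another are grouped. GroupTogether’s actual detection is considerably richer; the distance and angle thresholds here are illustrative only:

```python
# Simplified f-formation check: are two tracked people close enough and
# oriented toward each other? Thresholds are illustrative assumptions.

import math

def facing_each_other(pos_a, heading_a_deg, pos_b, heading_b_deg,
                      max_dist=2.0, max_angle_deg=45):
    dx, dy = pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]
    if math.hypot(dx, dy) > max_dist:
        return False
    angle_a_to_b = math.degrees(math.atan2(dy, dx))
    angle_b_to_a = (angle_a_to_b + 180) % 360

    def angular_diff(a, b):
        return abs((a - b + 180) % 360 - 180)

    return (angular_diff(heading_a_deg, angle_a_to_b) < max_angle_deg and
            angular_diff(heading_b_deg, angle_b_to_a) < max_angle_deg)

# Two people one metre apart, turned toward one another: grouped.
print(facing_each_other((0, 0), 0, (1, 0), 180))   # True
```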

GroupTogether also senses the subtle orientation cues of how users handle and posture their tablet computers. These cues are known as micro-mobility, a communicative strategy that people often employ with physical paper documents, such as when a sales representative orients a document towards you to direct your attention and indicate that it is your turn to sign.

Our system, then, is the first to put small-group f-formations, sensed via overhead Kinect depth-camera tracking, in play simultaneously with the micro-mobility of slate computers, sensed via embedded accelerometers and gyros.

The GroupTogether prototype sensing environment and set-up

GroupTogether uses f-formations to give meaning to the micro-mobility of slate computers. It understands which users have come together in a small group, and which users have not. So you can just tilt your tablet towards a couple of friends standing near you to share content, whereas another person who may be nearby but facing the other way — and thus clearly outside of the social circle of the small group — would not be privy to the transaction. Thus, the techniques lower the barriers to sharing information in small-group settings.
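A compact sketch of how sensed group membership might gate such a tilt-to-share gesture (function and data names invented for illustration, not the system’s API):

```python
# Sketch: a tilt toward someone only shares content if the sensed f-formations
# place sender and target in the same small group.

def maybe_share(sender, tilt_target, groups, content):
    """groups: list of sets of person ids, each set one sensed f-formation."""
    same_group = any(sender in g and tilt_target in g for g in groups)
    if same_group:
        return f"send '{content}' from {sender} to {tilt_target}"
    return "tilt ignored: target is outside the sender's social circle"

groups = [{"alice", "bob"}, {"carol"}]
print(maybe_share("alice", "bob", groups, "photo.jpg"))    # shared
print(maybe_share("alice", "carol", groups, "photo.jpg"))  # ignored
```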

Check out the video to see what these techniques look like in action, as well as to see how the system also considers groupings of people close to situated displays such as electronic whiteboards.

The full text of our scientific paper on GroupTogether and the citation is also available.

My co-author Nic Marquardt was the first author and delivered the talk. Saul Greenberg of the University of Calgary also contributed many great insights to the paper.

Image credits: Nic Marquardt

Paper: Cross-Device Interaction via Micro-mobility and F-formations (“GroupTogether”)

Marquardt, N., Hinckley, K., and Greenberg, S., Cross-Device Interaction via Micro-mobility and F-formations. In ACM UIST 2012 Symposium on User Interface Software and Technology (UIST ’12). ACM, New York, NY, USA, Cambridge, MA, Oct. 7-10, 2012, pp. (TBA). [PDF] [video – WMV]. Known as the GroupTogether system.

See also my post with some further perspective on the GroupTogether project.

Watch the GroupTogether video on YouTube

Paper: CodeSpace: Touch + Air Gesture Hybrid Interactions for Supporting Developer Meetings

Bragdon, A., DeLine, R., Hinckley, K., and Morris, M. R., Code space: Touch + Air Gesture Hybrid Interactions for Supporting Developer Meetings. In Proc. ACM International Conference on Interactive Tabletops and Surfaces (ITS ’11). ACM, New York, NY, USA, Kobe, Japan, November 13-16, 2011, pp. 212-221. [PDF] [video – WMV]. As featured on Engadget and many other online forums.

Watch CodeSpace video on YouTube

Book Chapter: Input Technologies and Techniques, 2012 Edition

Hinckley, K., Wigdor, D., Input Technologies and Techniques. Chapter 9 in The Human-Computer Interaction Handbook – Fundamentals, Evolving Technologies and Emerging Applications, Third Edition, ed. by Jacko, J., Published by Taylor & Francis. To appear. [PDF of author’s manuscript – not final]

This is an extensive revision of the 2007 and 2002 editions of my book chapter, and with some heavy weight-lifting from my new co-author Daniel Wigdor, it treats direct-touch input devices and techniques in much more depth. Lots of great new stuff. The book will be out in early 2012 or so from Taylor & Francis – keep an eye out for it!