Tag Archives: tilting user interfaces

ACM SIGMOBILE 2017 Test of Time Award for “Sensing Techniques for Mobile Interaction”

Recently the SIGMOBILE community recognized my turn-of-the-century research on mobile sensing techniques with one of their 2017 Test of Time Awards.

This was the paper (“Sensing Techniques for Mobile Interaction“) that first introduced techniques such as automatic screen rotation and raise-to-wake to mobile computing — features now taken for granted on the iPhones and tablets of the world.
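If you’ve never thought about how these techniques actually work, the core ideas fit in a few lines. Here is a toy sketch of tilt-based automatic screen rotation (with a hysteresis band so the display doesn’t flap back and forth) plus a crude raise-to-wake check. The axis conventions and thresholds are illustrative assumptions on my part, not the values from the original paper or from any shipping device.

```python
import math

# Toy sketch only: assumes a 3-axis accelerometer reporting gravity in units of g,
# with x = right, y = up along the screen, z = out of the screen. All thresholds
# are invented for illustration.

def screen_orientation(ax, ay, current="portrait", hysteresis_deg=10):
    """Tilt-based automatic screen rotation with a hysteresis band."""
    roll = math.degrees(math.atan2(ax, ay))   # angle of gravity within the screen plane
    if abs(roll) > 45 + hysteresis_deg:
        return "landscape_left" if roll > 0 else "landscape_right"
    if abs(roll) < 45 - hysteresis_deg:
        return "portrait"
    return current                             # inside the dead band: keep the previous orientation

def should_wake(ay, az, was_lying_flat):
    """Crude raise-to-wake: wake when the device goes from lying flat to held upright."""
    held_upright = ay > 0.6 and abs(az) < 0.7  # gravity mostly along the screen's up axis
    return was_lying_flat and held_upright

print(screen_orientation(ax=0.9, ay=0.1))                  # -> 'landscape_left'
print(should_wake(ay=0.8, az=0.3, was_lying_flat=True))    # -> True
```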

The award committee motivated the award as follows:

This paper showed how combinations of simple sensors could be used to create rich mobile interactions that are now commonplace in mobile devices today. It also opened up people’s imaginations about how we could interact with mobile devices in the future, inspiring a wide range of research on sensor-based interaction techniques.

Photo of the ACM SIGMOBILE Test of Time Award

And so as not to miss the opportunity to have fun with the occasion, in the following video I reflected (at times irreverently) on the work — including what I really thought about it at the time I was doing the research.

And some of the things that still surprise me about it after all these years.

You can find the original paper here.


Hinckley, K., Pierce, J., Sinclair, M., Horvitz, E. ACM SIGMOBILE 2017 Test of Time Award. [SIGMOBILE Test of Time Awards archive]

Paper: Sensing Techniques for Tablet+Stylus Interaction (Best Paper Award)

It’s been a busy year, so I’ve been more than a little remiss in posting about my Best Paper Award recipient from last year’s User Interface Software & Technology (UIST) symposium.

UIST is a great venue, particularly renowned for publishing cutting-edge innovations in devices, sensors, and hardware.

And software that makes clever use thereof.

Title slide - sensing techniques for stylus + tablet interaction

Title slide from my talk on this project. We had a lot of help, fortunately. The picture illustrates a typical scenario in pen & tablet interaction — where the user interacts with touch, but the pen is still at the ready, in this case palmed in the user’s fist.

The paper takes two long-standing research themes for me — pen (plus touch) interaction, and interesting new ways to use sensors — and smashes them together to produce the ultimate Frankenstein child of tablet computing:

Stylus prototype augmented with sensors

Microsoft Research’s sensor pen. It’s covered in groovy orange shrink-wrap, too. What could be better than that? (The shrink wrap proved necessary to protect some delicate connections between our grip sensor and the embedded circuitry).

And if you were to unpack this orange-gauntleted beast, here’s what you’d find:

Sensor components inside the pen

Components of the sensor pen, including inertial sensors, a AAAA battery, a Wacom mini pen, and a flexible capacitive substrate that wraps around the barrel of the pen.

But although the end-goal of the project is to explore the new possibilities afforded by sensor technology, in many ways, this paper kneads a well-worn old worry bead for me.

It’s all about the hand.

With little risk of exaggeration you could say that I’ve spent decades studying nothing but the hand. And how the hand is the window to your mind.

Or shall I say hands. How people coordinate their action. How people manipulate objects. How people hold things. How we engage with the world through the haptic sense, how we learn to articulate astoundingly skilled motions through our fingers without even being consciously aware that we’re doing anything at all.

I’ve constantly been staring at hands for over 20 years.

And yet I’m still constantly surprised.

People exhibit all sorts of manual behaviors, tics, and mannerisms, hiding in plain sight, that seemingly inhabit a strange shadow-world — the realm of the seen but unnoticed — because these behaviors are completely obvious yet somehow they still lurk just beneath conscious perception.

Nobody even notices them until some acute observer takes the trouble to point them out.

For example:

Take a behavior as simple as holding a pen in your hand.

You hold the pen to write, of course, but most people also tuck the pen between their fingers to momentarily stow it for later use. Other people do this in a different way, and instead palm the pen, in more of a power grip reminiscent of how you would grab a suitcase handle. Some people even interleave the two behaviors, based on what they are currently doing and whether or not they expect to use the pen again soon:

Tuck and Palm Grips for temporarily stowing a pen

Illustration of tuck grip (left) vs. palm grip (right) methods of stowing the pen when it is temporarily not in use.

This seems very simple and obvious, at least in retrospect. But such behaviors have gone almost completely unnoticed in the literature, much less actively sensed by the tablets and pens that we use — or even leveraged to produce more natural user interfaces that can adapt to exactly how the user is currently handling and using their devices.

If we look deeper into these writing and tucking behaviors alone, a whole set of grips and postures of the hand emerge:

Core Pen Grips

A simple design space of common pen grips and poses (postures of the hand) in pen and touch computing with tablets.

Looking even more deeply, once we have tablets that support a pen as well as full multi-touch, users naturally want to use their bare fingers on the screen in combination with the pen, so we see another range of manual behaviors that we call extension grips, based on placing one (or more) fingers on the screen while holding the pen:

Single Finger Extension Grips for Touch Gestures with Pen-in-hand

Much richness in “extension” grips, where touch is used while the pen is still being held, can also be observed. Here we see various single-finger extension grips for the tuck vs. the palm style of stowing the pen.

People also exhibited more ways of using multiple fingers on the touchscreen than I expected:

Multiple Finger Extension Grips for Touch Gestures with Pen-in-hand

Likewise, people extend multiple fingers while holding the pen to pinch or otherwise interact with the touchscreen.

So, it began to dawn on us that there was all this untapped richness in terms of how people hold, manipulate, write on, and extend fingers when using pen and touch on tablets.

And that sensing this could enable some very interesting new possibilities for the user interfaces for stylus + tablet computing.

This is where our custom hardware came in.

On our pen, for example, we can sense subtle motions, using full 3D inertial sensors (accelerometer, gyroscope, and magnetometer), as well as how the user grips the pen, using a flexible capacitive substrate wrapped around the entire barrel of the pen.

These capabilities then give rise to sensor signals such as the following:

Grip and motion sensors on the stylus
Sensor signals for the pen’s capacitive grip sensor with the writing grip (left) vs. the tuck grip (middle). Exemplar motion signals are shown on the right.

This makes various pen grips and motions stand out quite distinctly as states that we can identify using some simple gesture recognition techniques.
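To give a feel for how simple such recognition can be, here is a toy grip classifier that works from the capacitive image of the sleeve plus a motion-energy feature from the gyro. Everything here — the array layout, the features, the thresholds — is invented for illustration; this is not the recognizer from the paper.

```python
import numpy as np

# Toy grip classifier, for illustration only. The capacitive sleeve is treated as
# a small 2D image: rows run along the pen's length (tip first), columns run
# around the barrel. Gyro samples are an N x 3 array of angular velocities.

def grip_features(cap_image, gyro_samples):
    contact_area = float((cap_image > 0.2).mean())          # fraction of the sleeve touched
    hand_position = int(np.argmax(cap_image.sum(axis=1)))   # row where contact peaks
    motion_energy = float(np.mean(np.linalg.norm(gyro_samples, axis=1)))
    return contact_area, hand_position, motion_energy

def classify_grip(cap_image, gyro_samples):
    area, hand_row, motion = grip_features(cap_image, gyro_samples)
    if area < 0.05:
        return "not held"
    if area > 0.45:
        return "palm grip"       # fist wrapped around the barrel: big contact patch
    if motion < 0.1 and hand_row < cap_image.shape[0] // 3:
        return "writing grip"    # tripod grip clustered near the tip, pen held steady
    return "tuck grip"           # pen stowed between the fingers

# Example: a sleeve image with contact near the tip and an almost-still pen.
cap = np.zeros((30, 10)); cap[2:8, 3:7] = 1.0
print(classify_grip(cap, np.zeros((50, 3))))   # -> 'writing grip'
```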

Armed with these capabilities, we explored presenting a number of context-appropriate tools.

As the very simplest example, we can detect when you’re holding the pen in a grip (and posture) that indicates that you’re about to write. Why does this matter? Well, if the touchscreen responds when you plant your meaty palm on it, it causes no end of mischief in a touch-driven user interface. You’ll hit things by accident. Fire off gestures by mistake. Leave little “ink turds” (as we affectionately call them) on the screen if the application responds to touch by leaving an ink trace. But once we can sense it’s your palm, we can go a long ways towards solving these problems with pen-and-touch interaction.
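In code, the gist of that palm rejection is almost embarrassingly simple once the grip signal exists. A minimal sketch, with a made-up contact-area threshold (real palm rejection has to worry about timing, hover, and plenty of edge cases):

```python
LARGE_CONTACT_MM2 = 200   # palms produce much bigger contacts than fingertips (invented value)

def should_reject_touch(contact_area_mm2, pen_grip):
    # When the pen is held in a writing grip, a large touch contact is almost
    # certainly the resting palm rather than an intentional finger.
    return pen_grip == "writing grip" and contact_area_mm2 > LARGE_CONTACT_MM2

print(should_reject_touch(450, "writing grip"))   # -> True: swallow it, no ink turds
print(should_reject_touch(40, "tuck grip"))       # -> False: a deliberate fingertip tap
```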

To pull the next little rabbit out of my hat, if you tap the screen with the pen in hand, the pen tools (what else?) pop up:

Pen tools appear

Tools specific to the pen appear when the user taps on the screen with the pen stowed in hand.

But we can take this even further, such as to distinguish bare-handed touches — to support the standard panning and zooming behaviors — versus a pinch articulated with the pen-in-hand, which in this example brings up a magnifying glass particularly suited to detail work using the pen:

Pen Grip + Motion example: Full canvas zoom vs. Magnifier tool

A pinch multi-touch gesture with the left hand pans and zooms. But a pinch articulated with the pen-in-hand brings up a magnifier tool for doing fine editing work.
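The routing logic behind that distinction is equally compact. Here is a sketch, assuming the system can already tell whether the pinching hand is the one holding the pen (in the prototype this comes from the grip and motion sensors; here it is just a boolean parameter, and the returned tuples stand in for whatever the UI layer would actually do):

```python
def route_pinch(scale, centroid, pinch_by_pen_hand):
    """Send a two-finger pinch to the magnifier or to full-canvas zoom."""
    if pinch_by_pen_hand:
        return ("magnifier", centroid, scale)   # fine-detail loupe for pen work
    return ("canvas_zoom", centroid, scale)     # ordinary pan/zoom with the bare hand

print(route_pinch(1.8, (512, 300), pinch_by_pen_hand=True))    # -> magnifier
print(route_pinch(1.8, (512, 300), pinch_by_pen_hand=False))   # -> canvas_zoom
```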

Another really fun way to use the sensors — since we can sense the 3D orientation of the pen even when it is away from the screen — is to turn it into a digital airbrush:

Airbrush tool using the sensors

Airbrushing with a pen. Note that the conic section of the resulting “spray” depends on the 3D orientation of the pen — just as it would with a real airbrush.
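The geometry behind that spray footprint is a nice little exercise: intersect a cone (apex at the pen tip, axis along the pen) with the screen plane and you get an ellipse. Here is a sketch of the idealized math — a perfect cone, a flat screen, angles measured from the screen normal — which is not the code from our prototype:

```python
import math

# Idealized airbrush footprint: a spray cone with half-angle `alpha` leaves the
# pen tip at height `h` above the screen, with the pen axis tilted `theta` away
# from the screen normal. Angles in radians, distances in the same unit as h.

def airbrush_footprint(h, theta, alpha):
    """Return (center_offset, semi_major, semi_minor) of the spray ellipse,
    measured along the tilt direction from the point directly under the tip."""
    if theta + alpha >= math.pi / 2:
        raise ValueError("cone grazes the screen plane: footprint is unbounded")
    near = h * math.tan(theta - alpha)            # closest edge along the tilt direction
    far = h * math.tan(theta + alpha)             # farthest edge
    center = (near + far) / 2
    semi_major = (far - near) / 2
    # Half-width of the cone's cross-section at the ellipse center (exact for an ideal cone):
    axial = center * math.sin(theta) + h * math.cos(theta)
    semi_minor = math.sqrt(max(axial ** 2 / math.cos(alpha) ** 2 - center ** 2 - h ** 2, 0.0))
    return center, semi_major, semi_minor

# Pen tip 5 cm above the screen, tilted 40 degrees, with a 15-degree spray cone:
print(airbrush_footprint(5.0, math.radians(40), math.radians(15)))
```

Notice that as the tilt grows, the far edge of the ellipse runs away much faster than the near edge, which is exactly why the spray pattern elongates the way a real airbrush does.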

At any rate, it was a really fun project that garnered a best paper award, and a fair bit of press coverage (Gizmodo, Engadget, and a spot as FastCo Design’s #2 User Interface innovation of 2014, among other coverage). It’s pretty hard to top that.

Unless maybe we do a lot more with all kinds of cool sensors on the tablet as well.

Hmmm…

You might just want to stay tuned here. There’s all kinds of great stuff in the works, as always (grin).


Hinckley, K., Pahud, M., Benko, H., Irani, P., Guimbretiere, F., Gavriliu, M., Chen, X., Matulic, F., Buxton, B., Wilson, A., Sensing Techniques for Tablet+Stylus Interaction. In the 27th ACM Symposium on User Interface Software and Technology (UIST’14), Honolulu, Hawaii, Oct 5-8, 2014, pp. 605-614. http://dx.doi.org/10.1145/2642918.2647379

Watch Context Sensing Techniques for Tablet+Stylus Interaction video on YouTube

Paper: Toward Compound Navigation Tasks on Mobiles via Spatial Manipulation

I have three papers coming out this week at MobileHCI 2013, the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services, which convenes in Munich. It’s one of the great small conferences that focuses exclusively on mobile interaction, which of course is a long-standing interest of mine.

This post focuses on the first of those papers, and right behind it will be short posts on the other two projects that my co-authors are presenting this week.

I’ve explored many directions for viewing and moving through information on small screens, often motivated by novel hardware sensors as well as basic insights about human motor and cognitive capabilities. And I also have a long history in three-dimensional (spatial) interaction, virtual environments, and the like. But despite doing this stuff for decades, every once in a while I still get surprised by experimental results.

That’s just part of what keeps this whole research gig fun and interesting. If all the answers were simple and obvious, there would be no point in doing the studies.

In this particular paper, my co-authors and I took a closer look at a long-standing spatial, or through-the-lens, metaphor for interaction — akin to navigating documents (or other information spaces) by looking through your mobile as if it were a camera viewfinder — and subjected it to experimental scrutiny.

While this basic idea of using your mobile as a viewport onto a larger virtual space has been around for a long time, it had not been carefully scrutinized in the context of physically moving a mobile device’s small screen to view larger virtual documents. Nor had the potential advantages of the approach been fully articulated and realized.

This style of navigation (panning and zooming control) on mobile devices has great promise because it allows you to offload the navigation task itself to your nonpreferred hand, leaving your preferred hand free to do other things like carry bags of groceries — or perform additional tasks such as annotation, selection, and tapping commands — on top of the resulting views.

But, as our study also shows, it is an approach not without its challenges: sensing the spatial position of the device, and devising an appropriate input mapping, are both hard problems that need more progress before we can fully take advantage of this way of moving through information on a mobile device. For the time being, at least, the traditional touch gestures of pinch-to-zoom and drag-to-pan still appear to offer the most efficient solution for general-purpose navigation tasks.
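To give a concrete sense of what such an input mapping involves, here is a minimal sketch of the device-as-viewfinder idea, assuming some external tracker already reports the tablet’s lateral offset and its distance from a reference plane. The gains and the distance-to-zoom mapping below are invented for illustration; they are not the mappings we evaluated in the paper.

```python
# Minimal sketch of the "device as viewfinder" mapping. Gains and the zoom
# mapping are invented; devising a good mapping is one of the hard problems
# discussed above.

PAN_GAIN = 2.0          # document pixels per millimetre of lateral device motion
ZOOM_REF_MM = 400.0     # distance at which the document is shown at 1:1

def viewport_from_pose(dx_mm, dy_mm, dist_mm, start_center=(0.0, 0.0)):
    """Return (center_x, center_y, zoom) of the view onto the larger document."""
    zoom = ZOOM_REF_MM / max(dist_mm, 1.0)            # pull the device closer to zoom in
    cx = start_center[0] + PAN_GAIN * dx_mm / zoom    # pan in document coordinates, scaled
    cy = start_center[1] + PAN_GAIN * dy_mm / zoom    # so on-screen motion feels constant
    return cx, cy, zoom

print(viewport_from_pose(dx_mm=30, dy_mm=-10, dist_mm=200))   # -> (30.0, -10.0, 2.0)
```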

Pahud, M., Hinckley, K., Iqbal, S., Sellen, A., and Buxton, B., Toward Compound Navigation Tasks on Mobiles via Spatial Manipulation. In ACM 15th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2013), Munich, Germany, Aug. 27-30, 2013, pp. 113-122. [PDF] [video – MP4]

Toward Compound Navigation on Mobiles via Spatial Manipulation on YouTube

Paper: Motion and Context Sensing Techniques for Pen Computing

I continue to believe that stylus input — annotations, sketches, mark-up, and gestures — will be an important aspect of interaction with slate computers in the future, particularly when used effectively and convincingly with multi-modal pen+touch input. It also seems that every couple of years I stumble across an interesting new use or set of techniques for motion sensors, and this year proved to be no exception.

Thus, it should come as no surprise that my latest project has continued to push in this direction, exploring the possibilities for pen interaction when the physical stylus itself is augmented with inertial sensors including three-axis accelerometers, gyros, and magnetometers.

The sensor pen hardware

In recent years such sensors have become integrated with all manner of gadgets, including smart phones and tablets, and it is increasingly common for microprocessors to include such sensors directly on the die. Hence, in my view of the world, we are just at the cusp of sensor-rich stylus devices becoming commercially feasible, so it is only natural to consider how such sensors afford new interactions, gestures, or context-sensing techniques when integrated directly with an active (powered) stylus on pen-operated devices.

In collaboration with Xiang ‘Anthony’ Chen and Hrvoje Benko I recently published a paper exploring motion-sensing capabilities for electronic styluses, which takes a first look at some techniques for such a device. With some timely help from Tom Blank’s brilliant devices team at Microsoft Research, we built a custom stylus — fully wireless and powered by an AAAA battery — that integrates these sensors.

The techniques we explored range from very simple but clever things, such as reminding the user if they have left the pen behind — a common problem that users encounter with pen-based devices — to fun new techniques that emulate physical media, such as the gesture of striking a loaded brush against one’s finger in water media.

Ink spatter effect
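Just to make the first of those ideas concrete, here is a toy sketch of the “did you leave the pen behind?” check: if the tablet’s accelerometer says it is being carried around while the pen’s says it has been lying still (and the pen is not stowed with the tablet), raise a reminder. The window format and thresholds are made up; this is not the detector from the paper.

```python
import math

MOVING_G = 0.15   # spread in |acceleration| (in g) that counts as "being carried" (invented)
STILL_G = 0.03    # spread below this counts as "lying on the desk" (invented)

def motion_spread(accel_samples):
    """Crude activity measure: range of the acceleration magnitude over a window
    of (x, y, z) samples, in g. A carried device jiggles; a resting one doesn't."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in accel_samples]
    return max(mags) - min(mags)

def pen_left_behind(tablet_window, pen_window, pen_stowed_on_tablet):
    return (not pen_stowed_on_tablet
            and motion_spread(tablet_window) > MOVING_G
            and motion_spread(pen_window) < STILL_G)

walking = [(0.1, 0.2, 1.1), (0.0, -0.1, 0.8), (0.2, 0.1, 1.2)]    # tablet in hand
resting = [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.01)]    # pen on the desk
print(pen_left_behind(walking, resting, pen_stowed_on_tablet=False))   # -> True: remind the user
```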

Check out the video below for an overview of these and some of the other techniques we have come up with so far, or read more about it in the technical paper linked below.

We are continuing to work in this area, and have lots more ideas that go beyond what we were able to accomplish in this first stage of the project, so stay tuned for future developments along these lines.

Hinckley, K., Chen, X., and Benko, H., Motion and Context Sensing Techniques for Pen Computing. In Proc. Graphics Interface 2013 (GI’13), Regina, Saskatchewan, Canada, May 29-31, 2013. Canadian Information Processing Society, Toronto, Ont., Canada. [PDF] [video – MP4].

Watch Motion and Context Sensing Techniques for Pen Computing video on YouTube

Paper: Gradual Engagement between Digital Devices as a Function of Proximity: From Awareness to Progressive Reveal to Information Transfer

I collaborated on a nifty project with the fine folks from Saul Greenberg’s group at the University of Calgary exploring the emerging possibilities for devices to sense and respond to their digital ecology. When devices have fine-grained sensing of their spatial relationships to one another, as well as to the people in that space, it brings about new ways for users to interact with the resulting system of cooperating devices and displays.

This fine-grained sensing approach makes for an interesting contrast to what Nic Marquardt and I explored in GroupTogether, which intentionally took a more conservative approach towards the sensing infrastructure — with the idea in mind that sometimes, one can still do a lot with very little (sensing).

Taken together, the two papers nicely bracket some possibilities for the future of cross-device interactions and intelligent environments.

This work really underscores that we are still largely in the dark ages with regard to such possibilities for digital ecologies. As new sensors and sensing systems make this kind of rich awareness of the surround of devices and users possible, our devices, operating systems, and user experiences will grow to encompass the expanded horizons of these new possibilities as well.
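The core idea of gradual engagement is easy to caricature in a few lines: as a person (or their device) draws closer to a display, the system steps up what it reveals, from mere awareness, to a preview of transferable content, to actual transfer. The distance bands and stage names below are purely illustrative, not the thresholds from the paper.

```python
def engagement_stage(distance_m, facing_display):
    """Map sensed proximity and orientation to an engagement stage.
    Thresholds are illustrative only."""
    if not facing_display or distance_m > 3.0:
        return "ambient"              # out of range or turned away: show nothing special
    if distance_m > 1.5:
        return "awareness"            # hint that transferable content exists
    if distance_m > 0.5:
        return "progressive_reveal"   # preview the actual items as the person approaches
    return "information_transfer"     # within reach: allow content to move across devices

print(engagement_stage(2.0, facing_display=True))    # -> 'awareness'
print(engagement_stage(0.3, facing_display=True))    # -> 'information_transfer'
```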

The full citation and the link to our scientific paper are as follows:

Marquardt, N., Ballendat, T., Boring, S., Greenberg, S. and Hinckley, K., Gradual Engagement between Digital Devices as a Function of Proximity: From Awareness to Progressive Reveal to Information Transfer. In Proceedings of ACM Interactive Tabletops & Surfaces (ITS 2012), Boston, MA, USA, November 11-14, 2012. 10 pp. [PDF] [video – MP4].

Watch the Gradual Engagement via Proximity video on YouTube

GroupTogether — Exploring the Future of a Society of Devices

My latest paper discussing the GroupTogether system just appeared at the 2012 ACM Symposium on User Interface Software & Technology in Cambridge, MA.

GroupTogether video available on YouTube

I’m excited about this work — it really looks hard at what some of the next steps in sensing systems might be, particularly when one starts considering how users can most effectively interact with one another in the context of the rapidly proliferating Society of Devices we are currently witnessing.

I think our paper on the GroupTogether system, in particular, does a really nice job of exploring this with strong theoretical foundations drawn from the sociological literature.

F-formations are small groups of people engaged in a joint activity.

F-formations are the various types of small groups that people form when engaged in a joint activity.

GroupTogether starts by considering the natural small-group behaviors adopted by people who come together to accomplish some joint activity. These small groups can take a variety of distinctive forms, and are known collectively in the sociological literature as f-formations. Think of those distinctive circles of people that form spontaneously at parties: typically they are limited to a maximum of about 5 people, the orientation of the participants clearly defines an area inside the group that is distinct from the rest of the environment outside the group, and there are fairly well-established social protocols for people entering and leaving the group.

A small group of two users as sensed by GroupTogether's overhead Kinect depth-cameras

A small group of two users as sensed via GroupTogether’s overhead Kinect depth-cameras.

GroupTogether also senses the subtle orientation cues of how users handle and posture their tablet computers. These cues are known as micro-mobility, a communicative strategy that people often employ with physical paper documents, such as when a sales representative orients a document towards you to direct your attention and indicate that it is your turn to sign.

Our system, then, is the first to put small-group f-formations, sensed via overhead Kinect depth-camera tracking, in play simultaneously with the micro-mobility of slate computers, sensed via embedded accelerometers and gyros.

The GroupTogether prototype sensing environment and set-up

GroupTogether uses f-formations to give meaning to the micro-mobility of slate computers. It understands which users have come together in a small group, and which users have not. So you can just tilt your tablet towards a couple of friends standing near you to share content, whereas another person who may be nearby but facing the other way — and thus clearly outside of the social circle of the small group — would not be privy to the transaction. Thus, the techniques lower the barriers to sharing information in small-group settings.
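Stripped to its essence, the sharing rule is: a deliberate tilt, aimed at someone in your sensed f-formation, offers them the content; the same tilt aimed at a bystander outside the group does nothing. Here is a toy sketch of that rule, with invented data structures and thresholds standing in for the Kinect tracking and the actual recognizer.

```python
import math

TILT_THRESHOLD_DEG = 25.0    # how far the tablet must be tipped to count as a deliberate offer (invented)
AIM_TOLERANCE_DEG = 30.0     # how closely the tilt must point at the recipient (invented)

def share_target(owner, tilt_deg, tilt_bearing_deg, group, positions):
    """Return the f-formation member the tilt is aimed at, or None.
    `group` is the set of people sensed in the owner's f-formation;
    `positions` maps person -> (x, y) floor coordinates in metres."""
    if tilt_deg < TILT_THRESHOLD_DEG:
        return None                                   # just normal handling, not an offer
    ox, oy = positions[owner]
    best, best_err = None, AIM_TOLERANCE_DEG
    for person in group:
        if person == owner:
            continue
        px, py = positions[person]
        bearing = math.degrees(math.atan2(py - oy, px - ox)) % 360
        err = abs((bearing - tilt_bearing_deg + 180) % 360 - 180)   # wrap-aware angle difference
        if err < best_err:
            best, best_err = person, err
    return best

group = {"ken", "nic"}                                # sensed as one f-formation
positions = {"ken": (0.0, 0.0), "nic": (1.0, 0.2), "bystander": (-1.0, 0.0)}
print(share_target("ken", 35.0, 10.0, group, positions))   # -> 'nic'; the bystander is never considered
```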

Check out the video to see what these techniques look like in action, as well as to see how the system also considers groupings of people close to situated displays such as electronic whiteboards.

The full text of our scientific paper on GroupTogether and the citation is also available.

My co-author Nic Marquardt was the first author and delivered the talk. Saul Greenberg of the University of Calgary also contributed many great insights to the paper.

Image credits: Nic Marquardt

Paper: Cross-Device Interaction via Micro-mobility and F-formations (“GroupTogether”)

Marquardt, N., Hinckley, K., and Greenberg, S., Cross-Device Interaction via Micro-mobility and F-formations. In ACM UIST 2012 Symposium on User Interface Software and Technology (UIST ’12), Cambridge, MA, Oct. 7-10, 2012, pp. (TBA). ACM, New York, NY, USA. [PDF] [video – WMV]. Known as the GroupTogether system.

See also my post with some further perspective on the GroupTogether project.

Watch the GroupTogether video on YouTube