It’s been a busy year, so I’ve been more than a little remiss in posting about the paper that earned a Best Paper Award at last year’s User Interface Software & Technology (UIST) symposium.
UIST is a great venue, particularly renowned for publishing cutting-edge innovations in devices, sensors, and hardware.
And software that makes clever use thereof.

Title slide from my talk on this project. We had a lot of help, fortunately. The picture illustrates a typical scenario in pen & tablet interaction — where the user interacts with touch, but the pen is still at the ready, in this case palmed in the user’s fist.
The paper takes two long-standing research themes for me — pen (plus touch) interaction, and interesting new ways to use sensors — and smashes them together to produce the ultimate Frankenstein child of tablet computing:

Microsoft Research’s sensor pen. It’s covered in groovy orange shrink-wrap, too. What could be better than that? (The shrink wrap proved necessary to protect some delicate connections between our grip sensor and the embedded circuitry).
And if you were to unpack this orange-gauntleted beast, here’s what you’d find:

Components of the sensor pen, including inertial sensors, a AAAA battery, a Wacom mini pen, and a flexible capacitive substrate that wraps around the barrel of the pen.
But although the end-goal of the project is to explore the new possibilities afforded by sensor technology, in many ways, this paper kneads a well-worn old worry bead for me.
It’s all about the hand.
With little risk of exaggeration you could say that I’ve spent decades studying nothing but the hand. And how the hand is the window to your mind.
Or shall I say hands. How people coordinate their actions. How people manipulate objects. How people hold things. How we engage with the world through the haptic sense. How we learn to articulate astoundingly skilled motions through our fingers without even being consciously aware that we’re doing anything at all.
I’ve constantly been staring at hands for over 20 years.
And yet I’m still constantly surprised.
People exhibit all sorts of manual behaviors, tics, and mannerisms, hiding in plain sight, that seemingly inhabit a strange shadow-world — the realm of the seen but unnoticed — because these behaviors are completely obvious yet somehow they still lurk just beneath conscious perception.
Nobody even notices them until some acute observer takes the trouble to point them out.
For example:
Take a behavior as simple as holding a pen in your hand.
You hold the pen to write, of course, but most people also tuck the pen between their fingers to momentarily stow it for later use. Other people do this in a different way, and instead palm the pen, in more of a power grip reminiscent of how you would grab a suitcase handle. Some people even interleave the two behaviors, based on what they are currently doing and whether or not they expect to use the pen again soon:

Illustration of tuck grip (left) vs. palm grip (right) methods of stowing the pen when it is temporarily not in use.
This seems very simple and obvious, at least in retrospect. But such behaviors have gone almost completely unnoticed in the literature, much less actively sensed by the tablets and pens that we use — or even leveraged to produce more natural user interfaces that can adapt to exactly how the user is currently handling and using their devices.
If we look deeper into these writing and tucking behaviors alone, a whole set of grips and postures of the hand emerge:

A simple design space of common pen grips and poses (postures of the hand) in pen and touch computing with tablets.
Looking even more deeply, once we have tablets that support a pen as well as full multi-touch, users naturally want to use their bare fingers on the screen in combination with the pen, so we see another range of manual behaviors that we call extension grips, based on placing one (or more) fingers on the screen while holding the pen:

Much richness in “extension” grips, where touch is used while the pen is still being held, can also be observed. Here we see various single-finger extension grips for the tuck vs. the palm style of stowing the pen.
People also exhibited more ways of using multiple fingers on the touchscreen than I expected:

Likewise, people extend multiple fingers while holding the pen to pinch or otherwise interact with the touchscreen.
So, it began to dawn on us that there was all this untapped richness in terms of how people hold, manipulate, write on, and extend fingers when using pen and touch on tablets.
And that sensing this could enable some very interesting new possibilities for the user interface of stylus + tablet computing.
This is where our custom hardware came in.
On our pen, for example, we can sense subtle motions — using full 3D inertial sensors including accelerometer, gyroscope, and magnetometer — as well as sense how the user grips the pen — this time using a flexible capacitive substrate wrapped around the entire barrel of the pen.
These capabilities then give rise to sensor signals such as the following:
Sensor signals for the pen’s capacitive grip sensor with the writing grip (left) vs. the tuck grip (middle). Exemplar motion signals are shown on the right.
These signals make various pen grips and motions stand out quite distinctly, as states that we can identify using some simple gesture recognition techniques.
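To make that concrete, here is a minimal sketch, in Python, of the kind of simple recognition this enables. It assumes a 2D capacitive “grip image” from the barrel sensor plus a window of recent accelerometer samples, and labels the grip by nearest prototype; the features, thresholds, and prototype values are illustrative assumptions on my part, not the recognizer from the paper.

```python
# A minimal sketch (not the paper's actual recognizer) of classifying pen grips
# from a capacitive "grip image" (a 2D map of capacitance around the barrel)
# plus a coarse motion-energy feature from the inertial sensors.
# All names, thresholds, and prototypes here are hypothetical, for illustration only.
import numpy as np

def grip_features(grip_image, accel_samples):
    """grip_image: 2D array (rows along the barrel, cols around the circumference).
    accel_samples: (N, 3) array of recent accelerometer readings."""
    touched = grip_image > 0.2                       # cells the hand is contacting
    coverage = touched.mean()                        # fraction of the barrel covered
    rows = np.nonzero(touched.any(axis=1))[0]
    extent = (rows.max() - rows.min() + 1) / grip_image.shape[0] if rows.size else 0.0
    motion = float(np.linalg.norm(np.diff(accel_samples, axis=0), axis=1).mean())
    return np.array([coverage, extent, motion])

def classify_grip(features, prototypes):
    """Nearest-centroid classification against per-grip prototype feature vectors."""
    labels = list(prototypes)
    dists = [np.linalg.norm(features - prototypes[k]) for k in labels]
    return labels[int(np.argmin(dists))]

# Made-up prototypes for three of the grips discussed above.
prototypes = {
    "write": np.array([0.15, 0.4, 0.05]),   # a few contact points, little motion
    "tuck":  np.array([0.35, 0.7, 0.10]),   # fingers wrapped along the barrel
    "palm":  np.array([0.60, 0.9, 0.15]),   # broad, fist-like contact
}

grip_image = np.random.rand(10, 8) * 0.3             # stand-in for a sensor frame
accel = np.random.randn(50, 3) * 0.02
print(classify_grip(grip_features(grip_image, accel), prototypes))
```

The point is simply that grip-plus-motion signals reduce to a small set of discrete states the interface can act on; the recognition in the paper is of course more careful than this.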
Armed with these capabilities, we explored presenting a number of context-appropriate tools.
As the very simplest example, we can detect when you’re holding the pen in a grip (and posture) that indicates that you’re about to write. Why does this matter? Well, if the touchscreen responds when you plant your meaty palm on it, it causes no end of mischief in a touch-driven user interface. You’ll hit things by accident. Fire off gestures by mistake. Leave little “ink turds” (as we affectionately call them) on the screen if the application responds to touch by leaving an ink trace. But once we can sense that it’s your palm, we can go a long way towards solving these problems in pen-and-touch interaction.
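As a rough illustration of how grip sensing helps here, the sketch below filters out palm-like touches whenever the pen grip indicates the user is about to write. The Touch structure, the area threshold, and the rejection radius are hypothetical values for illustration, not the logic from the paper.

```python
# A simplified sketch (an assumption-laden illustration, not the shipped logic)
# of palm rejection driven by grip sensing: while the pen grip says "about to
# write", touches that look like a resting palm never reach the application.
from dataclasses import dataclass

@dataclass
class Touch:
    x: float
    y: float
    contact_area: float     # cm^2, as reported by the digitizer

PALM_AREA_CM2 = 2.5          # hypothetical threshold: fingertips are much smaller

def filter_touches(touches, pen_grip, pen_hover_pos=None, reject_radius=8.0):
    """Drop palm-like touches while the user is in a writing grip.
    pen_hover_pos: (x, y) of the hovering pen tip, if the digitizer reports it."""
    if pen_grip != "write":
        return touches                      # pass everything through unchanged
    kept = []
    for t in touches:
        if t.contact_area >= PALM_AREA_CM2:
            continue                        # big blob: almost certainly the palm
        if pen_hover_pos is not None:
            dx, dy = t.x - pen_hover_pos[0], t.y - pen_hover_pos[1]
            if (dx * dx + dy * dy) ** 0.5 < reject_radius:
                continue                    # small touch, but right under the writing hand
        kept.append(t)
    return kept

touches = [Touch(40, 60, 0.8), Touch(14, 13, 4.0)]   # a fingertip far from the pen, and a palm
print(filter_touches(touches, pen_grip="write", pen_hover_pos=(15, 14)))
```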
To pull the next little rabbit out of my hat, if you tap the screen with the pen in hand, the pen tools (what else?) pop up:
But we can take this even further, for example by distinguishing bare-handed touches — which drive the standard panning and zooming behaviors — from a pinch articulated with the pen in hand, which in this example brings up a magnifying glass particularly suited to detail work using the pen:

A pinch multi-touch gesture with the left hand pans and zooms. But a pinch articulated with the pen-in-hand brings up a magnifier tool for doing fine editing work.
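Here is a small sketch of the dispatch this implies: the same pinch routes to pan-and-zoom when the hand is empty, or to the magnifier when the grip sensor reports the pen tucked or palmed in the touching hand. The event fields and function names are hypothetical stand-ins, not the project’s actual API.

```python
# A hedged sketch of gesture dispatch conditioned on pen grip. Event fields and
# handler names are illustrative only.

def on_pinch(event, pen_state):
    """event: dict with 'center', 'scale', 'hand'; pen_state: dict from the grip sensor."""
    pen_in_touching_hand = (
        pen_state["held"]
        and pen_state["grip"] in ("tuck", "palm")
        and pen_state["hand"] == event["hand"]
    )
    if pen_in_touching_hand:
        show_magnifier(center=event["center"], zoom=event["scale"])
    else:
        pan_and_zoom(center=event["center"], scale=event["scale"])

def show_magnifier(center, zoom):
    print(f"magnifier at {center}, zoom {zoom:.2f}")

def pan_and_zoom(center, scale):
    print(f"pan/zoom around {center}, scale {scale:.2f}")

# A bare-handed pinch with the left hand vs. a pinch with the pen tucked in the right hand.
on_pinch({"center": (100, 120), "scale": 1.4, "hand": "left"},
         {"held": False, "grip": None, "hand": None})
on_pinch({"center": (300, 200), "scale": 1.4, "hand": "right"},
         {"held": True, "grip": "tuck", "hand": "right"})
```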
Another really fun way to use the sensors — since we can sense the 3D orientation of the pen even when it is away from the screen — is to turn it into a digital airbrush:

Airbrushing with a pen. Note that the conic section of the resulting “spray” depends on the 3D orientation of the pen — just as it would with a real airbrush.
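The geometry behind that spray is easy to sketch: model the spray as an idealized circular cone from the pen tip and intersect it with the tablet surface. Tilting the pen stretches the resulting ellipse and pushes it forward, just as with a physical airbrush. The tip height and cone half-angle below are illustrative assumptions, not values from the paper.

```python
# A geometric sketch of the airbrush "spray" footprint: intersect an idealized
# circular spray cone from the pen tip with the tablet plane. Tilt and
# half-angle are assumed parameters, chosen only to illustrate the idea.
import math

def spray_ellipse(height, tilt, half_angle):
    """height: pen-tip height above the surface.
    tilt: pen tilt from the surface normal, in radians (tilt + half_angle < pi/2).
    half_angle: spray cone half-angle, in radians.
    Returns (center_offset, semi_major, semi_minor), where center_offset is
    measured along the tilt direction from the point directly under the tip."""
    far = height * math.tan(tilt + half_angle)       # far edge of the spray
    near = height * math.tan(tilt - half_angle)      # near edge of the spray
    semi_major = (far - near) / 2
    center_offset = (far + near) / 2
    semi_minor = height * math.sin(half_angle) / math.sqrt(
        math.cos(half_angle) ** 2 - math.sin(tilt) ** 2)
    return center_offset, semi_major, semi_minor

# Held straight down the footprint is a circle; tilted, it elongates into an ellipse.
for tilt_deg in (0, 30, 60):
    c, a, b = spray_ellipse(height=3.0, tilt=math.radians(tilt_deg),
                            half_angle=math.radians(15))
    print(f"tilt {tilt_deg:2d} deg: center offset {c:.2f}, semi-axes {a:.2f} x {b:.2f}")
```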
At any rate, it was a really fun project that garnered a Best Paper Award and a fair bit of press coverage (Gizmodo, Engadget, and FastCo Design, which named it their #2 User Interface Innovation of 2014, among other coverage). It’s pretty hard to top that.
Unless maybe we do a lot more with all kinds of cool sensors on the tablet as well.
Hmmm…
You might just want to stay tuned here. There’s all kinds of great stuff in the works, as always (grin).
Hinckley, K., Pahud, M., Benko, H., Irani, P., Guimbretiere, F., Gavriliu, M., Chen, X., Matulic, F., Buxton, B., Wilson, A., Sensing Techniques for Tablet+Stylus Interaction. In the 27th ACM Symposium on User Interface Software and Technology (UIST’14). Honolulu, Hawaii, Oct 5-8, 2014, pp. 605-614. http://dx.doi.org/10.1145/2642918.2647379
- [PDF] [video – WMV] [Watch on YouTube]
- Best Paper Award (UIST 2014)
- Named FastCo Design’s #2 User Interface Innovation of 2014.
- I also have a talk available about this work, which I presented at the WIPPTE workshop.
- Watch the talk, Context Sensing Techniques for Tablet+Stylus Interaction, on YouTube.