Category Archives: pen + touch

Paper: Wearables as Context for Guiard-abiding Bimanual Touch

This particular paper has a rather academic-sounding title, but at its heart it makes a very simple and interesting observation regarding touch that any user of touch-screen technology can perhaps appreciate.

The irony is this: when interaction designers talk about “natural” interaction, they often have touch input in mind. And so people tend to take that for granted. What could be simpler than placing a finger — or with the modern miracle of multi-touch, multiple fingers — on a display?

And indeed, an entire industry of devices and form-factors — everything from phones, tablets, drafting-tables, all the way up to large wall displays — has arisen from this assumption.

Yet, if we unpack “touch” as it’s currently realized on most touchscreens, we can see that it remains very much a poor man’s version of natural human touch.

For example, on a large electronic-whiteboard such as the 84″ Surface Hub, multiple people can work upon the display at the same time. And it feels natural to employ both hands — as one often does in a wide assortment of everyday manual activities, such as indicating a point on a whiteboard with your off-hand as you emphasize the same point with the marker (or electronic pen).

Yet much of this richness — obvious to anyone observing a colleague at a whiteboard — represents context that is completely lost with “touch” as manifest in the vast majority of existing touch-screen devices.

For example:

  • Who is touching the display?
  • Are they touching the display with one hand, or two?
  • And if two hands, which of the multiple touch-events generated come from the right hand, and which come from the left?

Well, when dealing with input to computers, the all-too-common answer from the interaction designer is a shrug, a mumbled “who the heck knows,” and a litany of assumptions built into the user interface to try and paper over the resulting ambiguities, especially when the two factors (which user, and which hand) compound one another.

The result is that such issues tend to get swept under the rug, and hardly anybody ever mentions them.

But the first step towards a solution is recognizing that we have a problem.

This paper explores the implications of one particular solution that we have prototyped, namely leveraging wearable devices on the user’s body as sensors that can augment the richness of touch events.

A fitness band worn on the non-preferred hand, for example, can use its embedded motion sensors (accelerometers and gyroscopes) to sense the impulse that results when that hand makes finger-contact with a display. If the fitness band and the display exchange information and IDs, the resulting touch-event can then be associated with the left hand of a particular user. The inputs of multiple users instrumented in this manner can be separated from one another as well, and even used as a lightweight form of authentication.
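To make the idea concrete, here is a minimal sketch of the kind of correlation such a system could perform, assuming the band and the display share a synchronized clock. The data structures, field names, impulse threshold, and the 50 ms matching window are illustrative assumptions of mine, not the implementation described in the paper.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class WearableSample:
        user: str               # owner of the wearable, e.g. "alice"
        hand: str               # which hand the band or ring is worn on: "left" or "right"
        timestamp: float        # seconds, on a clock shared with the display
        impulse: float          # high-pass-filtered acceleration magnitude (in g)

    @dataclass
    class TouchEvent:
        timestamp: float
        x: float
        y: float
        user: Optional[str] = None
        hand: Optional[str] = None

    IMPULSE_THRESHOLD = 1.5     # assumed minimum impact strength for a finger contact
    MATCH_WINDOW = 0.050        # assumed +/- 50 ms window for pairing band and display events

    def attribute_touch(touch: TouchEvent,
                        recent_samples: List[WearableSample]) -> TouchEvent:
        """Label a touch-down with the user and hand whose wearable felt the impact."""
        best: Optional[WearableSample] = None
        for sample in recent_samples:
            close_in_time = abs(sample.timestamp - touch.timestamp) <= MATCH_WINDOW
            if close_in_time and sample.impulse >= IMPULSE_THRESHOLD:
                if best is None or sample.impulse > best.impulse:
                    best = sample
        if best is not None:
            touch.user = best.user
            touch.hand = best.hand
        return touch            # a touch with no matching impulse stays unattributed

In practice, of course, two users can touch down within the same window, so a real system also needs a policy for resolving such collisions, or for gracefully leaving an ambiguous touch unattributed.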

That then explains the “wearable” part of “Wearables as Context for Guiard-abiding Bimanual Touch,” the title of my most recent paper, but what the heck does “Guiard-abiding” mean?

Well, this is a reference to classic work by a research colleague, Yves Guiard, who is famous for a 1987 paper in which he made a number of key observations regarding how people use their hands — both of them — in everyday manual tasks.

In particular, for a skilled manipulative task such as writing on a piece of paper, Yves pointed out three general principles (assuming a right-handed individual):

  • Left hand precedence: The action of the left hand precedes the action of the right; the non-preferred hand first positions and orients the piece of paper, and only then does the pen (held in the preferred hand, of course) begin to write.
  • Differentiation in scale: The action of the left hand tends to occur at a larger temporal and spatial scale of motion; the positioning (and re-positioning) of the paper tends to be infrequent and relatively coarse compared to the high-frequency, precise motions of the pen in the preferred hand.
  • Right-to-Left Spatial Reference: The left hand sets a frame of reference for the action of the right; the left hand defines the position and orientation of the work-space into which the preferred hand inserts its contributions, in this example via the manipulation of a hand-held implement — the pen.

Well, as it turns out these three principles are very deep and general, and they can yield great insight into how to design interactions that fully take advantage of people’s everyday skills for two-handed (“bimanual”) manipulation — another aspect of “touch” that interaction designers have yet to fully leverage for natural interaction with computers.

This paper is a long way from a complete solution to the impoverished state of touch on modern touch-screens, but hopefully, by pointing out the problem and illustrating some consequences of augmenting touch with additional context (whether provided through wearables or other means), this work can lead to more truly “natural” touch interaction in the near future — allowing for simultaneous interaction by multiple users, each of whom can make full and complementary use of their hard-won manual skill with both hands.


Wearables (fitness band and ring) provide missing context (who touches, and with what hand) for direct-touch bimanual interactions.

Andrew M. Webb, Michel Pahud, Ken Hinckley, and Bill Buxton. 2016. Wearables as Context for Guiard-abiding Bimanual Touch. In Proceedings of the 29th Annual ACM Symposium on User Interface Software and Technology (UIST ’16), Tokyo, Japan, Oct. 16-19, 2016. ACM, New York, NY, USA, 287-300. https://doi.org/10.1145/2984511.2984564
[PDF] [Talk slides PDF] [Full video – MP4] [Watch 30 second preview on YouTube]


Book Chapter: Inking Outside the Box — How Context Sensing Affords More Natural Pen (and Touch) Computing

“Pen” and “Touch” are terms that tend to be taken for granted these days in the context of interaction with mobiles, tablets, and electronic-whiteboards alike.

Yet, as I have discussed in many articles here, even in the simplest combination of these modalities — that of “Pen + Touch” — new opportunities for interaction design abound.

And from this perspective we can go much further still.

Take “touch,” for example.

What does this term really mean in the context of input to computers?

Is it just when the user intentionally moves a finger into contact with the screen?

What if the palm accidentally brushes the display instead — is that still “touch?”

Or how about the off-hand, which plays a critical but oft-unnoticed role in gripping and skillfully orienting the device for the action of the preferred hand? Isn’t that an important part of “touch” as well?

Well, there’s good reason to argue that from the human perspective, these are all “touch,” even though most existing devices only generate a touch-event at the moment when a finger comes into contact with the screen.

Clearly, this is a very limited view, and clearly with greater insight of the context surrounding a particular touch (or pen, or pen + touch) event, we could enhance the naturalness of working with computers considerably.

This chapter, then, works through a series of examples and perspectives which demonstrate how much richness there is in such a re-conception of direct interaction with computers, and thereby suggests some directions for future innovations and richer, far more expressive interactions.


Hinckley, K., Buxton, B., Inking Outside the Box: How Context Sensing Affords More Natural Pen (and Touch) Computing. Appears as Chapter 3 in Revolutionizing Education with Digital Ink: The Impact of Pen and Touch Technology on Education (Human-Computer Interaction Series), First Edition, ed. by Tracy Hammond, Stephanie Valentine, & Aaron Adler. Published by Springer, June 13, 2016. [PDF – Author’s Draft]

P.S.: I’ve linked to the draft of the chapter that I submitted to the publisher, rather than the final version, as the published copy-edit muddied the writing by a gross misapplication of the Chicago Manual of Style, and in so doing introduced many semantic errors as well. Despite my best efforts I was not able to convince the publisher to fully reverse these undesired and unfortunate “improvements.” As such, my draft may contain some typographical errors or other minor discrepancies from the published version, but it is the authoritative version as far as I am concerned.

Olfactory Pen Creates Giant Stink, Fails to Make it out of Research Skunkworks

Microsoft has shown incredible stuff this week at //build around Pen and Ink experiences — including simultaneous Pen + Touch experiences — as showcased, for example, in the great video on “Inking at the Speed of Thought” that is now available on Channel 9.

But I’ve had a skunkworks project — so to speak — in the works as part of my research (in the course of a career spanning decades) for a long time now, and this particular vision of the future of pen computing has consumed my imagination for at least the last 37 seconds or so. I’ve put a lot of thought into it.

It’s long been recognized that the sense of smell is a powerful index into the human memory. The scent of decaying pulp instantly brings to mind a favorite book, for example — in my case a volume of the masterworks of Edgar Allan Poe that was bequeathed to me by my grandfather.

Or who can ever forget the dizzying scent of their first significant other?

So I thought: Why not a digital pen with olfactory output?

Just think of the possibilities for this remarkable technology:

Not only can you ink faster than the speed of thought, but now you can stink faster than the speed of thought!

And I’m here to tell you that this is entirely possible. I think. I’ve already conceived of an amazing confabulation called the Aromatic Recombinator (patent pending; filed April 1st, 2016 at 2:55 PM; summarily rejected by patent office, 2:57 PM; earnest appeal filed in hope of an affirmative response, 2:59 PM; earnest response received: TBA).

Nonetheless, I can understand the patent office’s reticence.

Because with this remarkable technology one can arouse almost any scent, from the headiest of perfumes all the way to the most cloying musk, simply by scribbling on the screen of your tablet as if it were an electronic scratch-n-sniff card. A conception on which I have another patent pending, by the way.

Admittedly, some details remain sketchy, but I remain highly optimistic that the obvious problems can be sniffed out in short order.

And if not, rest assured, I will raise one hell of a stink.

[Happy April Fools Day.]

Paper: Sensing Techniques for Tablet+Stylus Interaction (Best Paper Award)

It’s been a busy year, so I’ve been more than a little remiss in posting my Best Paper Award recipient from last year’s User Interface Software & Technology (UIST) symposium.

UIST is a great venue, particularly renowned for publishing cutting-edge innovations in devices, sensors, and hardware.

And software that makes clever uses thereof.

Title slide - sensing techniques for stylus + tablet interaction

Title slide from my talk on this project. We had a lot of help, fortunately. The picture illustrates a typical scenario in pen & tablet interaction — where the user interacts with touch, but the pen is still at the ready, in this case palmed in the user’s fist.

The paper takes two long-standing research themes for me — pen (plus touch) interaction, and interesting new ways to use sensors — and smashes them together to produce the ultimate Frankenstein child of tablet computing:

Stylus prototype augmented with sensors

Microsoft Research’s sensor pen. It’s covered in groovy orange shrink-wrap, too. What could be better than that? (The shrink wrap proved necessary to protect some delicate connections between our grip sensor and the embedded circuitry).

And if you were to unpack this orange-gauntleted beast, here’s what you’d find:

Sensor components inside the pen

Components of the sensor pen, including inertial sensors, a AAAA battery, a Wacom mini pen, and a flexible capacitive substrate that wraps around the barrel of the pen.

But although the end-goal of the project is to explore the new possibilities afforded by sensor technology, in many ways, this paper kneads a well-worn old worry bead for me.

It’s all about the hand.

With little risk of exaggeration you could say that I’ve spent decades studying nothing but the hand. And how the hand is the window to your mind.

Or shall I say hands. How people coordinate their action. How people manipulate objects. How people hold things. How we engage with the world through the haptic sense, how we learn to articulate astoundingly skilled motions through our fingers without even being consciously aware that we’re doing anything at all.

I’ve constantly been staring at hands for over 20 years.

And yet I’m still constantly surprised.

People exhibit all sorts of manual behaviors, tics, and mannerisms, hiding in plain sight, that seemingly inhabit a strange shadow-world — the realm of the seen but unnoticed — because these behaviors are completely obvious yet somehow they still lurk just beneath conscious perception.

Nobody even notices them until some acute observer takes the trouble to point them out.

For example:

Take a behavior as simple as holding a pen in your hand.

You hold the pen to write, of course, but most people also tuck the pen between their fingers to momentarily stow it for later use. Other people do this in a different way, and instead palm the pen, in more of a power grip reminiscent of how you would grab a suitcase handle. Some people even interleave the two behaviors, based on what they are currently doing and whether or not they expect to use the pen again soon:

Tuck and Palm Grips for temporarily stowing a pen

Illustration of tuck grip (left) vs. palm grip (right) methods of stowing the pen when it is temporarily not in use.

This seems very simple and obvious, at least in retrospect. But such behaviors have gone almost completely unnoticed in the literature, much less actively sensed by the tablets and pens that we use — or even leveraged to produce more natural user interfaces that can adapt to exactly how the user is currently handling and using their devices.

If we look deeper into these writing and tucking behaviors alone, a whole set of grips and postures of the hand emerge:

Core Pen Grips

A simple design space of common pen grips and poses (postures of the hand) in pen and touch computing with tablets.

Looking even more deeply, once we have tablets that support a pen as well as full multi-touch, users naturally want to use their bare fingers on the screen in combination with the pen, so we see another range of manual behaviors that we call extension grips, based on placing one (or more) fingers on the screen while holding the pen:

Single Finger Extension Grips for Touch Gestures with Pen-in-hand

Much richness in “extension” grips, where touch is used while the pen is still being held, can also be observed. Here we see various single-finger extension grips for the tuck vs. the palm style of stowing the pen.

People also exhibited more ways of using multiple fingers on the touchscreen than I expected:

Multiple Finger Extension Grips for Touch Gestures with Pen-in-hand

Likewise, people extend multiple fingers while holding the pen to pinch or otherwise interact with the touchscreen.

So, it began to dawn on us that there was all this untapped richness in how people hold the pen, manipulate it, write with it, and extend their fingers to touch the screen when using pen and touch on tablets.

And that sensing this could enable some very interesting new possibilities for the user interfaces for stylus + tablet computing.

This is where our custom hardware came in.

On our pen, for example, we can sense subtle motions — using full 3D inertial sensors, including an accelerometer, gyroscope, and magnetometer — as well as how the user grips the pen, this time via a flexible capacitive substrate wrapped around the entire barrel of the pen.

These capabilities then give rise to sensor signals such as the following:

Grip and motion sensors on the stylus
Sensor signals for the pen’s capacitive grip sensor with the writing grip (left) vs. the tuck grip (middle). Exemplar motion signals are shown on the right.

This makes various pen grips and motions stand out quite distinctly, as states that we can identify using some simple gesture-recognition techniques.
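As a rough illustration of what “simple” can mean here, the sketch below classifies the grip with a nearest-centroid recognizer over a handful of hand-crafted features. The feature choices, the assumed size of the capacitive grip image, and the class names are mine, for illustration only; this is not the recognizer described in the paper.

    import numpy as np

    GRIP_CLASSES = ["writing", "tuck", "palm", "no_grip"]

    def grip_features(grip_image: np.ndarray, gyro: np.ndarray) -> np.ndarray:
        """grip_image: capacitance map around the barrel (assumed size, e.g. 10 x 30).
        gyro: recent angular-velocity samples, shape (N, 3)."""
        img = grip_image / (grip_image.max() + 1e-6)         # normalize contact intensity
        coverage = float((img > 0.2).mean())                 # fraction of the barrel touched
        rows = np.arange(img.shape[0])
        centroid = float((img.sum(axis=1) * rows).sum() / (img.sum() + 1e-6))
        motion = float(np.linalg.norm(gyro, axis=1).mean())  # how much the pen is moving
        return np.array([coverage, centroid / img.shape[0], motion])

    def train_centroids(examples):
        """examples: list of (grip_image, gyro, label) tuples with known grips."""
        buckets = {label: [] for label in GRIP_CLASSES}
        for grip_image, gyro, label in examples:
            buckets[label].append(grip_features(grip_image, gyro))
        return {label: np.mean(feats, axis=0) for label, feats in buckets.items() if feats}

    def classify_grip(grip_image, gyro, centroids) -> str:
        """Assign the grip whose training centroid is nearest in feature space."""
        f = grip_features(grip_image, gyro)
        return min(centroids, key=lambda label: np.linalg.norm(f - centroids[label]))

Real grip recognition benefits from richer features and per-user training data, but even a lightweight classifier of this sort is enough to drive the techniques described below.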

Armed with these capabilities, we explored presenting a number of context-appropriate tools.

As the very simplest example, we can detect when you’re holding the pen in a grip (and posture) that indicates that you’re about to write. Why does this matter? Well, if the touchscreen responds when you plant your meaty palm on it, it causes no end of mischief in a touch-driven user interface. You’ll hit things by accident. Fire off gestures by mistake. Leave little “ink turds” (as we affectionately call them) on the screen if the application responds to touch by leaving an ink trace. But once we can sense it’s your palm, we can go a long way towards solving these problems with pen-and-touch interaction.
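Here is a deliberately simplified sketch of the kind of rejection policy that sensing the writing grip makes possible; the thresholds and parameter names are assumptions for illustration, not the actual logic in our prototype.

    def should_reject_touch(contact_area_mm2: float,
                            distance_to_pen_tip_mm: float,
                            pen_grip: str,
                            pen_near_screen: bool) -> bool:
        """Heuristic palm rejection: ignore contacts that look like an incidental
        palm plant, especially while the pen is gripped for writing."""
        PALM_SIZED_MM2 = 300.0   # assumed area above which a contact looks like a palm
        NEAR_PEN_MM = 80.0       # assumed radius around the pen tip while writing
        looks_like_palm = contact_area_mm2 >= PALM_SIZED_MM2
        about_to_write = (pen_grip == "writing") and pen_near_screen
        near_the_pen = distance_to_pen_tip_mm <= NEAR_PEN_MM
        return looks_like_palm or (about_to_write and near_the_pen)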

To pull the next little rabbit out of my hat, if you tap the screen with the pen in hand, the pen tools (what else?) pop up:

Pen tools appear

Tools specific to the pen appear when the user taps on the screen with the pen stowed in hand.

But we can take this even further, such as to distinguish bare-handed touches — to support the standard panning and zooming behaviors — versus a pinch articulated with the pen-in-hand, which in this example brings up a magnifying glass particularly suited to detail work using the pen (a small routing sketch follows the figure below):

Pen Grip + Motion example: Full canvas zoom vs. Magnifier tool

A pinch multi-touch gesture with the left hand pans and zooms. But a pinch articulated with the pen-in-hand brings up a magnifier tool for doing fine editing work.
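The routing logic itself can be almost embarrassingly small once the grip sensing is in place. A sketch, with invented gesture and tool names standing in for the real ones:

    def route_touch_gesture(gesture: str, pen_grip: str) -> str:
        """Decide what a touch gesture should do, given the sensed pen grip.
        gesture: "tap" or "pinch"; pen_grip: output of the grip classifier."""
        pen_stowed_in_hand = pen_grip in ("tuck", "palm")
        if gesture == "tap":
            # Tapping with the pen stowed in the same hand summons the pen tools.
            return "pen_tools_palette" if pen_stowed_in_hand else "default_tap"
        if gesture == "pinch":
            # A bare-handed pinch pans and zooms; a pen-in-hand pinch opens the magnifier.
            return "magnifier_tool" if pen_stowed_in_hand else "canvas_pan_zoom"
        return "unhandled"

The interesting design work, of course, lies in deciding which tools deserve these grip-differentiated gestures in the first place.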

Another really fun way to use the sensors — since we can sense the 3D orientation of the pen even when it is away from the screen — is to turn it into a digital airbrush:

Airbrush tool using the sensors

Airbrushing with a pen. Note that the conic section of the resulting “spray” depends on the 3D orientation of the pen — just as it would with a real airbrush.
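Under the hood, the mapping from pen orientation to that conic “spray” is straightforward geometry. The sketch below is my own back-of-the-envelope version, assuming the display is the z = 0 plane, the tip hovers at height h above (x0, y0), and the spray cone is narrow enough to intersect the display as a closed ellipse; it is not the rendering code from the paper.

    import math

    def spray_ellipse(x0: float, y0: float, h: float,
                      azimuth: float, tilt: float, cone_half_angle: float):
        """Approximate the elliptical spray footprint of a tilted airbrush pen.

        azimuth: direction the pen leans toward, measured in the display plane (radians).
        tilt: angle of the pen axis from the display normal (0 = pen held vertically).
        cone_half_angle: half-angle of the spray cone leaving the tip.
        Assumes tilt + cone_half_angle < pi/2 so the footprint is a closed ellipse.
        """
        near_edge = h * math.tan(tilt - cone_half_angle)  # closest spray edge from the point under the tip
        far_edge = h * math.tan(tilt + cone_half_angle)   # farthest spray edge
        semi_major = (far_edge - near_edge) / 2.0
        center_offset = (far_edge + near_edge) / 2.0
        slant = h / math.cos(tilt)                        # tip-to-surface distance along the pen axis
        semi_minor = slant * math.tan(cone_half_angle)    # rough cross-axis radius (approximation)
        cx = x0 + center_offset * math.cos(azimuth)
        cy = y0 + center_offset * math.sin(azimuth)
        return cx, cy, semi_major, semi_minor, azimuth    # ellipse center, axes, and orientation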

At any rate, it was a really fun project that garnered a Best Paper Award and a fair bit of press coverage (Gizmodo, Engadget, and FastCo Design, which named it the #2 User Interface innovation of 2014, among other outlets). It’s pretty hard to top that.

Unless maybe we do a lot more with all kinds of cool sensors on the tablet as well.

Hmmm…

You might just want to stay tuned here. There’s all kinds of great stuff in the works, as always (grin).


Hinckley, K., Pahud, M., Benko, H., Irani, P., Guimbretiere, F., Gavriliu, M., Chen, X., Matulic, F., Buxton, B., and Wilson, A., Sensing Techniques for Tablet+Stylus Interaction. In Proceedings of the 27th ACM Symposium on User Interface Software and Technology (UIST ’14), Honolulu, Hawaii, Oct 5-8, 2014, pp. 605-614. http://dx.doi.org/10.1145/2642918.2647379

Watch Context Sensing Techniques for Tablet+Stylus Interaction video on YouTube

Book Chapter: Input/Output Devices and Interaction Techniques, Third Edition

Hinckley, K., Jacob, R., Ware, C., Wobbrock, J., and Wigdor, D., Input/Output Devices and Interaction Techniques. Appears as Chapter 21 in The Computing Handbook, Third Edition: Two-Volume Set, ed. by Tucker, A., Gonzalez, T., Topi, H., and Diaz-Herrera, J. Published by Chapman and Hall/CRC (Taylor & Francis), May 13, 2014. [PDF – Author’s Draft – may contain discrepancies]

Invited Talk: WIPTTE 2015 Presentation of Sensing Techniques for Tablets, Pen, and Touch

The organizers of WIPTTE 2015, the Workshop on the Impact of Pen and Touch Technology on Education, kindly invited me to speak about my recent work on sensing techniques for stylus + tablet interaction.

One of the key points that I emphasized:

To design technology to fully take advantage of human skills, it is critical to observe what people do with their hands when they are engaged in manual activities such as handwriting.

Notice my deliberate use of the plural, hands, as in both of ’em, working in a division of labor that is a perfect example of cooperative bimanual action.

The power of crayon and touch.

My six-year-old daughter demonstrates the power of crayon and touch technology.

And of course I had my usual array of stupid sensor tricks to illustrate the many ways that sensing systems of the future embedded in tablets and pens could take advantage of such observations. Some of these possible uses for sensors probably seem fanciful, in this antiquated era of circa 2015.

But in eerily similar fashion, some of the earliest work that I did on sensors embedded in handheld devices also felt completely out-of-step with the times when I published it back in the year 2000. A time so backwards it already belongs to the last millennium for goodness sakes!

Now aspects of that work are embedded in practically every mobile device on the planet.

It was a fun talk, with an engaged audience of educators who are eager to see pen and tablet technology advance to better serve the educational needs of students all over the world. I have three kids of school age now so this stuff matters to me. And I love speaking to this audience because they always get so excited to see the pen and touch interaction concepts I have explored over the years, as well as the new technologies emerging from the dim fog that surrounds the leading frontiers of research.

I am a strong believer in the dictum that the best way to predict the future is to invent it.

And the pen may be the single greatest tool ever invented to harness the immense creative power of the human mind, and thereby to scrawl out — perhaps even in the just-in-time fashion of the famous book Harold and the Purple Crayon — the uncertain path that leads us forward.

                    * * *

Update: I have also made the original technical paper and demonstration video available now.

If you are an educator seeing impacts of pen, tablet, and touch technology in the classroom, then I strongly encourage you to start organizing and writing up your observations for next year’s workshop. The 2016 edition of the series (now renamed CPTTE) will be held at Brown University in Providence, Rhode Island, and chaired by none other than the esteemed Andries van Dam, who is my academic grandfather (i.e., my Ph.D. advisor’s mentor) and of course widely respected in computing circles throughout the world.

Hinckley, K., WIPTTE 2015 Invited Talk: Sensing Techniques for Tablet + Stylus Interaction. Workshop on the Impact of Pen and Touch Technology on Education (WIPTTE 2015), Redmond, WA, April 28th, 2015. [Slides (.pptx)] [Slides PDF]

 

Project: Bimanual In-Place Commands

Here’s another interesting loose end, this one from 2012, which describes a user interface known as “In-Place Commands” that Michel Pahud, myself, and Bill Buxton developed for a range of direct-touch form factors, including everything from tablets and tabletops all the way up to electronic whiteboards a la the modern Microsoft Surface Hub devices of 2015.

Microsoft is currently running a Request for Proposals for Surface Hub research, by the way, so check it out if that sort of thing is at all up your alley. If your proposal is selected you’ll get a spiffy new Surface Hub and $25,000 to go along with it.

We’ve never written up a formal paper on our In-Place Commands work, in part because there is still much to do and we intend to pursue it further when the time is right. But in the meantime the following post and video documenting the work may be of interest to aficionados of efficient interaction on such devices. This also relates closely to the Finger Shadow and Accordion Menu explored in our Pen + Touch work, documented here and here, which collectively form a class of such techniques.

While we wouldn’t claim that any one of these represents the ultimate approach to command and control for direct input, in sum they illustrate many of the underlying issues, the rich set of capabilities we strive to support, and possible directions for future embellishments as well.

Knies, R. In-Place: Interacting with Large Displays. Reporting on research by Pahud, M., Hinckley, K., and Buxton, B. TechNet Inside Microsoft Research Blog Post, Oct 4th, 2012. [Author’s cached copy of post as PDF] [Video MP4] [Watch on YouTube]

In-Place Commands Screen Shot

The user can call up commands in-place, directly where he is working, by touching two fingers down and fanning out the available tool palettes. Many of the functions thus revealed act as click-through tools, where the user can select and apply a tool in a single action — as the user is about to do for the line-drawing tool in the image above.
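For readers who like to think in code, here is a small sketch of that interaction logic; the palette layout, tool names, and data structures are invented for illustration and are not the actual implementation.

    from dataclasses import dataclass, field
    from typing import Dict, Optional, Tuple

    Rect = Tuple[float, float, float, float]   # x, y, width, height

    @dataclass
    class InPlacePalette:
        center: Tuple[float, float]
        items: Dict[str, Rect] = field(default_factory=dict)   # tool name -> fanned-out item bounds

        def hit_test(self, x: float, y: float) -> Optional[str]:
            for tool, (ix, iy, w, h) in self.items.items():
                if ix <= x <= ix + w and iy <= y <= iy + h:
                    return tool
            return None

    def on_two_finger_chord(x1: float, y1: float, x2: float, y2: float) -> InPlacePalette:
        """Both fingers down: fan the tool palettes out around the touch centroid."""
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        palette = InPlacePalette(center=(cx, cy))
        palette.items["line"] = (cx + 40, cy - 20, 40, 40)          # illustrative layout only
        palette.items["highlighter"] = (cx + 40, cy + 30, 40, 40)
        return palette

    def on_stroke_begin(palette: InPlacePalette, x: float, y: float, current_tool: str) -> str:
        """Click-through: a stroke that starts on a palette item both selects and applies that tool."""
        hit = palette.hit_test(x, y)
        return hit if hit is not None else current_tool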

Watch Bimanual In-Place Commands video on YouTube